Nature of Intelligence – Episode Two – Language and Thought

In the previous post we looked at the question of What is Intelligence? This was based on the first episode of the Santa Fe Institute’s Complexity podcast series. In the second episode they looked into the relationship between language and thought. I had always assumed that language was the tool we used to express our thinking. And while that’s true, the full story is more complex than that.

In the opening Melanie states, “Language is the backbone of human culture.” But that statement is soon followed by the questions, “Are humans intelligent because we have language, or do we have language because we’re intelligent? How do language and thinking interact? And can one exist without the other?”

Like I said, the relationship is not so simple. Here are a few passages to prime your consciousness before you listen to the podcast.

At one point Gary Lupyan explains that, even with a very social and collaborative species like humans, if we take away language, we take away the major tool for creating culture and for transmitting culture. What he doesn’t say explicitly, but what I feel is inherently evident in that statement, is that culture is created and transmitted by way of how humans use language to tell stories.

While recapping the episode about mid-way, Melanie states that, “…language is an incredible tool for collaboration, and collaboration drives our intelligence.” It’s an interesting observation, as collaboration involves both doing things together, as well as sharing information by way of stories. But she also reminds us “…that language makes it easy to lie and to trick people.”

As the world deals with an avalanche of lies and misinformation from China, the U.S., and Russia, it’s a time for reflecting on the stark reality that intelligence and language don’t require a moral foundation. We can tell whatever story we want.


Transcript

Spoken and written language is completely unique to the human species, and it’s part of how we evolved. It’s the backbone of our societies, one of the primary ways we judge others’ intellect. So, are humans intelligent because we have language, or do we have language because we’re intelligent? How do language and thinking interact? And can one exist without the other? Guests: Ev Fedorenko, Steve Piantadosi, and Gary Lupyan

Ev Fedorenko: It is absolutely the case that not having access to language has devastating effects, but it doesn’t seem to be the case that you fundamentally cannot learn certain kinds of complex things.

Abha Eli Phoboo: From the Santa Fe Institute, this is Complexity.

Melanie Mitchell: I’m Melanie Mitchell.

Abha: And I’m Abha Eli Phoboo.

Melanie: Think about this podcast that you’re listening to right now. You’re, hopefully, learning by just listening to us talk to you. And the fact that you can take in new information this way, through what basically comes down to sophisticated vocal sounds, is pretty astonishing.

In our last episode, we talked about how one of the major ways humans learn is by being in the world and interacting with it. But we also use language to share information and ideas with each other without needing firsthand experience. Language is the backbone of human culture.

Abha: It’s hard to imagine where we’d be without it. If you’ve ever visited a country where you don’t speak the language, you know how disorienting it is to be cut off from basic communication. So in today’s episode, we’re going to look at the role language plays in intelligence. And the voices you’ll hear were recorded remotely across different countries, cities and work spaces.

Melanie: Are humans intelligent because we have language, or do we have language because we’re intelligent? How do language and thinking interact? And can one exist without the other?

Melanie: Part One: Why do humans have language?

Melanie: Across the animal kingdom, there are no other species that communicate with anything like human language.

Abha: This isn’t to say that animals aren’t communicating in sophisticated ways, and a lot of that sophistication goes unnoticed.

Melanie: But the way humans talk — with our long conversations and complex syntax — is completely unique. And it’s part of how we evolved.

Abha: For several decades, a dominant theory of human language was something called generative linguistics, or generative grammar.

Melanie: The linguist Noam Chomsky made this idea popular, and it basically goes like this: there’s an inherent, underlying structure of rules that all languages follow. And from birth, we have a hard-wired bias toward language as opposed to other forms of communication — we’re biologically predisposed to language and these syntactic rules. This is why human language is, according to Chomsky, unique to our species and universal across different cultures.

Abha: This theory has been incredibly influential. But it turns out, it doesn’t seem to be right.

Gary Lupyan: So I’ve never been a fan of generative linguistics, Chomsky’s kind of core arguments about universal grammar or the need for innate grammatical knowledge.

Abha: This is Gary Lupyan.

Gary: I am Gary Lupyan, professor of psychology at the University of Wisconsin-Madison. I’m a cognitive scientist. I study the evolution of language, the effects of language on cognition, on perception, and over the last few years trying to make sense of large language models like lots of other people.

Melanie: In recent years, the development of large language models has bolstered Gary’s dislike of generative grammar. The old thinking was that in order to use language well, you needed to be biologically wired to know these language rules from the start. But LLMs aren’t programmed with any grammatical rules baked into them. And yet, they spit out incredibly coherent writing.

Gary: Even before these large language models, there were plenty of arguments against that view. I think these are the last nails in the coffin. Producing correct, grammatically sophisticated, even I’d argue semantically coherent language: these models can do all that even without, by modern standards, huge amounts of training. It shows that in principle, one does not need any of this type of innate grammatical knowledge.

Abha: So, what’s going on here? Steve Piantadosi is a psychology and neuroscience professor at UC Berkeley, studying how children learn language and math. He says that language does have rules, but those rules are emergent. They’re not there from the start.

Steve Piantadosi: I think that the key difference is that Chomsky and maybe mainstream linguistics tends to state its theories already at the high level of abstraction. They say, here are the rules that I think this system is following. Whereas in a large language model, when you go to build one, you don’t tell it the high level rules about how language works. You tell it the low level rules about how to learn and how to construct its own internal configurations. And you tell it that it should do that in a way that predicts language well. And when you do that, it kind of configures itself in some way.

Melanie: What’s an example of a high level rule?

Steve: For example, a high level rule in English is if you have a sentence, you can put it inside of another sentence with the word that. So I could say, “I drank coffee today.” That’s a whole sentence. And I could say, “John believed that I drank coffee today.” And because that rule is about how to make a sentence out of another sentence, you can actually do it again.

So I can say, “Mary doubted that John believed that I drank coffee today.” And if you were going to sit down and write a grammar of English, if you’re going to try to describe what the grammatical and ungrammatical sentences of English were, you’d have to have some kind of rule that said that, right? Because any English speaker you ask is going to tell you that “John said that I drank coffee today” is an acceptable English sentence.

And also “I drank coffee today” is an acceptable English sentence. Large language models, when they’re built, don’t know anything like that rule. They’re just a mess of parameters and weights and connections, and they have to be exposed to enough English in order to figure out that rule.

And I’m pretty sure ChatGPT knows that rule, right? Because it can form sentences that have an embedded sentence in that way. So when you make ChatGPT, you don’t tell it that rule from the start, it has to construct it and discover it.

And I think what’s kind of interesting is that building a system like ChatGPT that can discover that rule doesn’t negate the existence of that rule in English speakers’ minds. So internally in ChatGPT somewhere, there has to be some kind of realization of that rule or something like it.

So the hope for these other theories, I think, or at least these other kind of basic observations about language, is that they will be realized in some way inside the internal configurations that these models arrive at.

I think it’s not quite that simple because the large language models are much better than our theories. So we don’t have any kind of rule-based account of anything that comes close to what they can do. But they have to have something like that because they exhibit that behavior.

Abha: And we should say, these rules we’re talking about are not the same as the quote-unquote “rules” you learn in school, like when your teacher tells you how to use prepositions or, “don’t split an infinitive.”

Steve: Yeah, sorry, let me just clarify. In linguistics or in cognitive science, when people talk about rules like this, they don’t mean the rules like don’t split infinitives. Basically anything you heard from an English teacher, you should just completely ignore in cognitive science and linguistics. It’s just made up. I mean, it’s literally made up, often just to reinforce class distinctions and things.

The kinds of rules that linguistics and cognitive science are interested in are ones which are descriptive, that talk about how people actually do speak. People do split infinitives, right, and they do end sentences with prepositions and, you know, pretty much any rule you’ve ever heard from an English teacher, they had to tell you because it’s going against how you naturally speak.

So that’s just some weird class thing, I think, that’s going on. And what we’re interested in are the kind of descriptive rules of how the system is kind of actually functioning in nature. And in that case, most people are just not even aware of the rules.

Melanie: Apologies to all the English teachers out there.

Abha: But to recap, language does have rules, like the “that” rule that Steve described, but we’re not born with these rules already hardwired into our brains. And the rules that linguists have documented so far aren’t as complete and precise as the actual rules that exist — the statistical patterns that ChatGPT has probably figured out and encoded at some point during its training period.

Melanie: Yet, none of this explains why we humans are using complex language, but other animals aren’t. I asked Gary what he thought about this.

Melanie: So there’s a lot of debate about the role language plays in intelligence. Is language a cause of or a result of humans’ superiority over other animals in certain kinds of cognitive capacities?

Gary: I think language is one of the major reasons why human intelligence is what it is. So more the cause than the result. There is something, obviously, in our lineage that makes us predisposed to language. I happen to think that what that is has much more to do with the kind of drive to share information, to socialize, than anything language specific or grammar specific.

And you see that in infants, infants want to engage. They want to share information, not just use language in an instrumental way. So it gives us access to information that we otherwise wouldn’t have access to.

And then it’s a hugely powerful tool for collaboration. You can make plans, you can ask one another to help. You can divide tasks in much more effective ways. And so without language, even if you take a very social, collaborative species like humans, you take away language and you take away the major tool for creating culture and for transmitting culture.

Melanie: Just to follow up, chimps and bonobos are very social species and have a lot of communication within their groups. Why didn’t they develop this drive you’re talking about for language? Why did we develop it and not them?

Gary: It’s only useful to a particular kind of species, a particular type of niche. So it has a really big startup cost. So kids have to learn this stuff. Their language is kind of useless to them before they put in the years that it takes to learn it. It’s also, and many have written on this, language is also very easy to lie with.

So it’s an unreliable system. Words are cheap. And so, reliance on language sort of only makes sense in a society that already has a kind of base level of trust. And so, I think the key to understanding the emergence of language is understanding the emergence of that type of prosociality that language then feeds back on and helps accelerate, but it needs to be there.

And so if you look at other primate societies, there is cooperation within kin groups. There is not broad scale cooperation. There is often aggression. There’s not sharing. So language just doesn’t make sense.

Abha: As Gary mentioned, there’s a huge startup cost for learning language. Humans have much longer childhoods than other species.

Ev: Ever since we’re born, we start paying attention to all sorts of regularities in the inputs we get, including in linguistic inputs.

Abha: This is Ev Fedorenko. Ev’s a neuroscientist at MIT, and she’s been studying language for the past two decades. As she mentioned, we start learning language from day one. That learning includes internalizing the structure and patterns that linguists used to assume were innate.

Ev: We start by paying attention to how sounds may go together to form kind of regular patterns like syllables and various transitions that are maybe more or less common. Pay attention to that. Then we figure out that some parts of that input correspond to meanings.

The example I often say is like every time mama says cat, there’s this fuzzy thing around, maybe it’s not random, right? And you kind of start linking parts of the linguistic input to parts of the world. And then of course you learn what are the rules for how you put words together to express more complex ideas.

So all of that knowledge seems to be stored in what I call the language system. And those representations are accessed both when I understand what somebody else is saying to me, because I have to use this form-to-meaning mapping system to decode your messages, and when I have some abstract thing in my mind, an idea, and I’m trying to express it for someone else using this shared code, which in this case is English, right?

Abha: And often, we learn this shared code by interacting with our surroundings. Like, as Ev described, learning about a cat if there’s a cat in the room with you.

Melanie: But, you could also learn about cats without being able to interact with one. Someone could tell you about a cat, and you could start to create an idea for this thing called, “cat,” which you’ve never seen, but you know that it has pointy ears, it’s furry, and it makes a low rumbling sound when it’s content. That’s the power of language. Here’s Gary again.

Gary: So much of what we learn, and it’s very difficult to quantify, to put a number on, like what percent of what we know we’ve learned from talking to others, from reading. Most of formal education takes that role, right? It would not be possible without language, and arguably not even without written language. If you have enough language training, you can just kind of map onto the visual world.

And my lab has done some work connecting this to previously collected data from people who are born congenitally blind, and the various things they surprisingly learn about the visual world that one would think are only learnable through direct experience, showing that while normally sighted people might be learning it through direct experience, a lot of that information is embedded in the structure of language.

Abha: And when we learn through language, we’re not just learning about physical objects. Language gives us the ability to name abstract concepts and categories, too. For instance, if you think about what the word “shoe” means, it refers to a type of object, but not one specific thing.

Steve: We wrote a paper about this and gave the example of shoes that were made out of eggplant skins. You could imagine doing that, drying out an eggplant skin and sewing up the sides and adding laces and fitting it around your feet and whatever. And you’ve probably never encountered shoes made out of eggplants before, but we all just agreed that that could happen. That you could find them.

And so that tells you that it’s not the physical object exactly that’s defining what the concept means. Because I just gave you a new physical object. It has to be something more abstract, more about the relationships and the use of it that defines what the thing is. I don’t think it’s so crazy to think that, you know, language is special in some way.

There’s certainly lots of things that we acquire through language. Right, this is, I think, especially salient if you talk to a kid and they’re asking why questions and you explain things that are abstract and that you can’t show them just in language and they can come to pretty good understandings of systems that they’ve never encountered before, you know, if they ask how clouds form or, you know, what the moon is doing or whatever, right? All of those are things that we learn about through a linguistic system.

So the right picture might be one where there’s a small kind of continuous or quantitative change in memory capacity that enables language, but then once you have language that opens up this kind of huge learning potential for cultural transmission of ideas and learning complicated kinds of things from your parents and from other people in your community.

Melanie: So Abha, we asked at the beginning of the episode why humans have language. And what we’ve heard from Gary, Steve, and Ev so far is that language probably emerged as a result of humans’ drive to socialize and to collaborate. And there’s a feedback effect between these social drives and language itself. So language is an incredible tool for collaboration, and collaboration drives our intelligence. Gary, for example, thinks that language is a major cause of human intelligence being what it is.

Abha: Right, right. It was interesting how Steve also pointed out that language enables a whole new way of learning and of cultural evolution. Language allows us to quickly learn new things, you know, from the people around us, say our parents, our friends, and other people we interact with.

It also lets us learn without having to experience something ourselves. Say, for example, when we were walking with our parents as little kids and they said, you know, “Don’t jump out in front of the car.” We tend to trust them and not have to experience it ourselves. And this is enabled by language, right?

Melanie: Yeah, we should definitely appreciate our parents more. But on the downside, Gary also pointed out that language makes it easy to lie and to trick people. So relying on language only makes sense when society has a basic level of trust.

Abha: That is so true. I mean, if we don’t trust each other, it’s hard to function as a society, but trust comes at such a high cost too. And the other downside of language, you know, is that it requires a long learning period, because we can’t learn a language overnight. We’re not born speaking a language. Our childhood is so prolonged, and that’s another high cost.

Melanie: Yeah. So the advantages of language must have outweighed those downsides in evolution.

Abha: Yes. Another interesting point that just came up is that today’s large language models have shown that certain linguistic theories are just wrong. Steve claims that LLMs have disproven Noam Chomsky’s notion of an innate universal grammar in the brain, right?

Melanie: Yeah, people have really changed their thinking about how language works in the brain. In part two, we’ll look at what brain imaging can tell us about language and what happens when people lose their language abilities.

Abha: Part Two: Are language and thought separate in the brain?

Abha: One of Ev’s signature methods is using fMRI brain scans to examine which systems in the brain light up when we use language. She and her collaborators have developed experiments to investigate the relationship between language and other forms of cognition.

Ev: It’s very simple. I mean, the logic of the experiments where we’ve looked at the relationship between language and thought is all pretty much the same, just using different kinds of thought. But the idea is you take individuals, put them in an fMRI scanner, and you have them do a task that you know reliably engages your language regions.

Abha: This could be, for example, reading or listening to coherent sentences while your brain is being scanned. Then, that map would be compared to the regions that light up when you hear sequences of random words and sounds that sound speech-like, but are completely nonsensical.

Ev: And if you guys visit MIT, I can scan you and print you a map of your language system. It takes about five minutes to find. Very reliable. And again, if I scan you today or 10 years later, I’ve done this on some people 10 years apart, it’s in exactly the same place. It’s very reliable within people. It’s very robust, so we find those language regions. And then we basically ask, okay, let’s have you engage in some form of thinking.

Maybe have you solve some math problems, or do some kind of pattern recognition test, and we basically ask, do the circuits that light up when you process language overlap with the circuits that are active when, for example, you engage in mathematical reasoning, like doing addition problems or whatnot? And we very consistently find, across many domains of thought, pretty much everything we’ve looked at so far, that the language regions are not really active, hardly at all, and some other system, non-overlapping with the language regions, is working really hard. So it’s not the case that we engage the language mechanisms to solve these other problems.

Melanie: I know there’s been some controversy about how easy it is to interpret the results of fMRI. What can you tell us, is that a hard thing to do? Is it an easy thing to do?

Ev: I don’t think interpreting fMRI data presents any particular challenge beyond any other kind of data. I mean, you want to do robust and rigorous research. Before you make a strong claim based on whatever findings, you want to make sure that your findings tell you what you think they do, but that’s a challenge for any research.

I don’t think it’s related to particular measurements you’re taking. I mean, there are certainly limitations of fMRI, and one of them is that we can’t look at fast time scales of information processing. We just don’t have access to what’s happening on a millisecond or tens of milliseconds or even hundreds of milliseconds time scale, which for some questions, it doesn’t matter, but for some questions, it really does. And so that makes fMRI not well suited for those questions where it matters. But in general, good robust findings from fMRI are very robustly replicable.

Steve: I’ve been actually very convinced by Ev’s arguments in particular.

Abha: That’s Steve Piantadosi again.

Steve: You can find people who are experts in some domain, like mathematics experts or chess grandmasters or whatever, who have lost linguistic abilities. And that is a very nice type of natural experiment that shows you that the linguistic abilities aren’t the kind of substrate for reasoning in those domains, because you can lose the linguistic abilities and still have the reasoning abilities.

There might still be a learning story. It would probably be very hard to learn chess or learn mathematics without having language. But I think that once you learn it, or learn it well enough to become an expert, it seems like there’s some other kind of system or some other kind of processing that happens non-linguistically. What it shows you is that you can be really good at language without having the ability to do the kind of sequential, multi-step reasoning that seems to characterize human thinking.

And that I think is surprising. It didn’t have to be like that. It could have been that language was the substrate that we used for everything or that language was such a difficult problem that if you solved language, you would necessarily have to have all of the underlying kind of reasoning machinery that people have. But it seems that that’s not right, that you can do quite a bit in language without having much reasoning.

Abha: And on the flipside, you can do a lot of reasoning without language. As Ev mentioned before, she and her collaborators have identified language systems in the brain that show up very reliably in fMRI scans. These language systems are mostly in the left hemisphere. So, what happens if someone loses these systems completely?

Ev: This fMRI approach is very nicely complemented by investigations of patients with severe language problems, right? So another approach, this one we’ve had around for much longer than fMRI, is to take individuals who have sustained severe damage to the language system, and sometimes left hemisphere strokes are large and they pretty much wipe out that whole system.

So these are so-called individuals with global aphasia. If you give them a sentence, they cannot infer any meaning from it. And we know it’s not a low level deficit, because you can establish that it’s across modalities, like written and spoken, and so on. So it seems like the linguistic representations that they’ve set up for form-to-meaning mapping, that they’ve spent their lifetime learning, are lost, really destroyed. And then you can ask about the cognitive capacities in these individuals. Can they still think complex thoughts?

And how do you test this? Well, you give them behavioral tasks. And for some of them, of course, you have to be a very clever experimentalist because you can no longer explain things verbally. But people come up with ways to get instructions across. They understand kind of thumbs up, thumbs down judgments.

So you give them well-formed or ill-formed mathematical expressions or musical patterns or something like that. And what you find is that there are some individuals who are severely linguistically impaired — the language system is gone, as best we can test it with whatever tools we have, and yet, they’re okay cognitively. They just lost that code to take the sophistication of their inner minds and translate it into this shared representational format.

And a lot of these individuals are severely depressed because they’re taken to be mentally challenged, right? Because that’s how we often judge people, is by the way they talk. That’s why foreigners often suffer in this way too. Judgments are made about their intellectual capacities and otherwise and so on.

Anyway, a lot of these individuals seem to have the ability to think quite preserved, which suggests that at least in the adult brain, you can take that language system out once you’ve acquired that set of knowledge bits, right? You can take it out and it doesn’t seem to affect any of the thinking capacities that we’ve tested so far.

Melanie: So here’s an extremely naive question. So if language and thought are dissociated, at least in adults, why does it feel like when I’m thinking that I’m actually thinking in words and in language?

Ev: That’s a great question that comes up quite often, not naive at all. It’s a question about the inner voice. A lot of people have this percept that there is a voice in their heads talking. It’s a good question to which I don’t think we as a field have very clear answers yet about what it does, what mechanisms it relies on.

What we do know is that it’s not a universal phenomenon, which tells you that it cannot be a critical ingredient of complex thought, because a lot of people say that they don’t have an inner voice. Some of them are MIT professors, and they’re like, “What are you talking about? You have a voice in your head? That’s not good. Have you seen a doctor?”

And it’s a very active area of research right now. A lot of people got interested in this. You may have heard, about 10 years ago, there was a similar splash about aphantasia, the inability of some people to form visual mental images. So, similar to how some people don’t know what you mean when you say you have an inner voice, some people cannot form mental images.

Like, you say “Imagine the house you lived in when you were a child,” and they’re like “Got nothing there.” You know, it’s blank, I just can’t form that mental image. I can describe it, I know facts about it, but I can’t form that mental image. And these kinds of things like inner voice mental imagery, those are very hard things to study with the methods that we currently have available.

Abha: Yeah, I think I was talking to someone who actually told me they don’t have an inner voice and they actually are left with a feeling, but they can’t necessarily describe the feeling. And so they don’t know how to put it into language when they have a thought.

Ev: That’s a very good point, because my husband, who doesn’t have an inner voice, often uses this as an argument: “If we were thinking in language, why is it sometimes so hard to explain what you think? You know you have this idea very clearly for yourself and you just have trouble formulating it.” That’s a good point.

Melanie: But, Gary sees the relationship between language and thought a bit differently. He doesn’t think they can be separated so neatly.

Gary: I think Ev and her lab are doing fabulous work and we agree on many things. This is one thing we don’t agree on.

Melanie: In Ev’s example, patients who have had strokes lost their language systems in the brain, but they could still do complex cognitive tasks. They didn’t lose their ability to think.

Gary: So it’s possible to find individuals with aphasia that have typical behavior. And so that shows that at least in some cases, one can find cases where language is not necessary. So there are two complications with this. One is that people tend to have aphasia due to a stroke that tends to happen in older age. And so they’ve had a lifetime of experience with language. And so, just because a task doesn’t light up the language network doesn’t mean the task does not rely on language.

It doesn’t mean that language has not played a role in basically setting up the brain that you have as an adult, such that you don’t need language in the moment, but you needed exposure to language to enable you to do the task in the first place.

Abha: We asked Ev what she made of this argument, that even if language isn’t necessary in the moment, it still plays a big role in developing your adult brain. But she doesn’t think it’s as important as Gary does. She refers to another population of people, which are individuals who are born deaf and aren’t taught sign language.

Ev: Unless there are other signers in the community, or unless they’re moved into an environment where they can interact with the signers, they often grow up not having input to language. Especially if they’re in an isolated community. Growing up they figure out some system called home sign, which is a very, very basic system.

And so you can ask whether these individuals are able to develop certain thinking capacities. And it is absolutely the case that not having access to language has devastating effects, right? You can’t build relationships in the same way. You can’t learn as easily. Of course, through language I can just tell you all sorts of things about the world. Most of the things you probably know, you learned through language. But it doesn’t seem to be the case that you fundamentally cannot learn certain kinds of complex things.

So there are examples of individuals like that who have been able to learn math. Okay, it takes them longer. If you don’t have somebody to tell you how to do differential equations, you figure it out in whatever ways you can. So it’s certainly the case that language is an incredibly useful tool. And presumably the accumulation of knowledge over generations has allowed us to build the world we live in today. But it doesn’t undermine the separability of the language and thinking systems.

Abha: In a lot of areas, it seems that Gary, Steve, and Ev are on the same page. Language has helped humans achieve incredible things, and it’s a very useful tool.

Melanie: But where they seem to differ is on just how much language and thought influence each other, and in which direction the causal arrow is pointing: does language make us intelligent, or is language the result of our intelligence? Ev’s work shows that many types of tasks can be done without lighting up the language systems in the brain. Combined with examples from stroke patients and other research, this gives her reason to believe that language and cognition are largely separate things.

Abha: Gary, on the other hand, isn’t ready to dismiss the role of language so easily — it could still be crucial for developing adult cognition, and, generally speaking, some people might rely on it more than others.

Melanie: And Steve offers one more example of how language can make our learning more efficient, regardless of whether or not it’s strictly necessary.

Steve: So, if you’re an expert in any domain, you know a ton of words and vocabulary about that specific domain that non-experts don’t, right? That’s true in scientific domains if you’re a physicist versus a biologist, but it’s also true in non-scientific domains. People who sew know tons of sewing words and people who are coal miners know tons of coal mining words and I think that those words are, as we were discussing, real technologies. They’re real cultural innovations that are very useful.

That’s why people use those words, because they need to convey a specific meaning in a specific situation. And by having those words, we’re probably able to communicate more efficiently and more effectively about those specific domains. So I think that this kind of ability to create and then learn domain specific vocabularies is probably very important and probably allows us to think all kinds of thoughts that otherwise would be really, really complicated.

Imagine being in a situation where you don’t have the domain specific vocabulary and you have to describe everything, and it becomes very clunky and hard to talk about. That’s why in sciences, especially, we come up with terms, so it really enables us to do things that would be really hard otherwise.

Melanie: Steve isn’t saying that it’s impossible to learn specific skills without language, but from his perspective, it’s more difficult and less likely.

Abha: But Ev has a slightly different view.

Ev: There are human cultures, for example, that don’t have exact math. Like the Pirahã or the Tsimane’, some tribes in the Brazilian Amazon, they don’t have numbers because they don’t need numbers. There are people who will make the claim that they don’t have numbers because they don’t have words for numbers.

And I don’t understand how the logic goes in this direction. I think they don’t have words for numbers because they don’t have the need for numbers in their culture. So they don’t come up with a way to refer to those concepts. Then of course, there’s different stories for why numbers came about. One common story has to do with farming, right?

When you have to keep track of entities that are similar, like 200 cows, and you want to make sure that you came back with the same number of cows you left with, whatever it is, 15 cows. And then you figure out some counting system, typically using digits, right? A lot of cultures start with digits. Anyway, then you come up with words. And once you have those labels, of course you can then do more things. You can solve tasks that require you to hold onto those quantities.

But it’s not like not having words prevents you from figuring out a system of thought and representation to keep track of that information. So I think the directionality is in a different way than some people have put it forward.

Abha: So Melanie, our question for this part of the episode was about whether language and thought are separate in the brain. And Ev seems to have very compelling evidence that they’re separate.

Melanie: Yeah, her results with fMRI were really surprising to me.

Abha: Right? Me too. Both Steve and Ev stress that language makes communication between people very efficient, but they point out that when people lose their language abilities, say because of a stroke or some other injury, their thinking, that is, their non-linguistic cognitive abilities, often remains largely unaffected.

Melanie: But Abha, Gary pushed back on this. He noted that people who have had strokes tend to be older with cognitive abilities that they’ve had for a long time. So Gary pointed out that maybe you need language to enable cognition in the first place. And his own research has shown that this is true to some extent.

Abha: I guess there are really two questions here. First, do language and cognition really need to be entangled in the brain during infancy and childhood when both linguistic and cognitive skills are still being formed? And the second is, are language and cognition separate in adults who have established language and cognitive abilities already?

Melanie: Exactly. Ev’s work addresses the latter question, but not the former. And Ev admits that the neuroscience and psychology of language have been contentious fields for a long time. Here’s Ev.

Ev: Language has always been a very controversial field where people have very strong biases and opinions. The best I can do is try to be open minded and just keep training people to do rigorous work and to think hard about even the fundamental assumptions in the field. Those should always be questioned. Everything should always be questioned.

Abha: So here’s another question: what does all of this mean for large language models? In theory, the skills LLMs have exhibited are the same skills that map onto the language systems in the brain. They have the formal competence of patterns and language rules. But, if their foundations are statistical patterns in language, how much thinking can they do now, and in the future? And how much have they learned already?

Murray Shanahan: I mean, people sometimes use the word, an alien intelligence. I prefer the word exotic. It’s a kind of exotic mind-like entity.

Melanie: That’s next time, on Complexity. Complexity is the official podcast of the Santa Fe Institute. This episode was produced by Katherine Moncure, and our theme song is by Mitch Mignano. Additional music from Blue Dot Sessions. I’m Melanie, thanks for listening.


Copyright Storytelling with Impact® – All rights reserved

Nature of Intelligence – Episode One – What is Intelligence

I tend to think of storytelling as sitting at the intersection of four elements:

  • Consciousness — awareness of self, the environment, and our thoughts
  • Intelligence — ability to learn, understand, reason, and solve problems
  • Imagination — create mental images, ideas, or concepts beyond reality
  • Creativity — generate original ideas, solutions, and artistic expressions

They’re different terms, of course, yet you can see how they interact with each other. It’s also apparent that they’re involved in the process of creating stories. They’re so fundamental, in fact, that they go a long way towards describing what makes us human. But the funny thing is, science doesn’t know how to accurately define any of these concepts.

While thousands of hours have been spent seeking answers, and scientists can talk for days on end about their findings, it is still a mystery. Take Shakespeare, for example. How did he utilize these aspects of humanity to create something as magical as Hamlet? And if we can’t properly describe one of these elements, how do we explain how they work together? And extending beyond us mortals, will AI ever be able to replicate this magic?

So when I ran across the third season of the Santa Fe Institute’s Complexity podcast, which is devoted to the exploration of intelligence, I had to listen in. If you’re interested in how we create stories in our heads, I recommend you do the same, as the season looks at the concept of intelligence through a human lens, as well as through the lens of artificial intelligence.

17th Century Playwright in England
There’s so much information in this first episode, but I wanted to share four quotes that intrigued me. First off is this notion of “common sense”. It seems simple, but again, it’s elusive, hard to capture in words. How would you describe it?

Common sense gives us basic assumptions that help us move through the world and know what to do in new situations. But it gets more complicated when you try to define exactly what common sense is and how it’s acquired. ~ Melanie Mitchell

This notion of an equivalent phenomenon describes much of the human / AI debate, as there is a sense that a machine will never be human, but maybe it can be close enough.

I think there’s a difference between saying, can we reach human levels of intelligence when it comes to common sense, the way humans do it, versus can we end up with the equivalent phenomenon, without having to do it the way humans do it. ~ John Krakauer

This goes back to the reality that we don’t know what makes humans human, so how are we to compare a computer algorithm to what it means to be us?

I think it’s just again, a category mistake to say we’ll have something like artificial general intelligence, because we don’t have natural general intelligence. ~ Alison Gopnik

But we’re more than thinking animals. We have emotions. We fall in love, feel pain, express joy and sorrow. Or, in this case, grief. Computers are learning how to simulate emotions such as grief, but is that even possible?

I don’t know what it would mean for a computer to feel grief. I just don’t know. I think we should respect the mystery. ~ John Krakauer

So here goes, take a listen to Episode 1 and see what you think. The transcript is below if you feel so inclined (as I did) to follow along. It’s some heady stuff.

Transcript

Alison Gopnik: It’s like asking, is the University of California Berkeley library smarter than I am? Well, it definitely has more information in it than I do, but it just feels like that’s not really the right question.

Abha Eli Phoboo: From the Santa Fe Institute, this is Complexity.

Melanie Mitchell: I’m Melanie Mitchell.

Abha: And I’m Abha Eli Phoboo.

Abha: Today’s episode kicks off a new season for the Complexity podcast, and with a new season comes a new theme. This fall, we’re exploring the nature and complexity of intelligence in six episodes — what it means, who has it, who doesn’t, and if machines that can beat us at our own games are as powerful as we think they are. The voices you’ll hear were recorded remotely across different locations, including countries, cities and work spaces. But first, I’d like you to meet our new co-host.

Melanie: My name is Melanie Mitchell. I’m a professor here at the Santa Fe Institute. I work on artificial intelligence and cognitive science. I’ve been interested in the nature of intelligence for decades. I want to understand how humans think and how we can get machines to be more intelligent, and what it all means.

Abha: Melanie, it’s such a pleasure to have you here. I truly can’t think of a better person to guide us through what, exactly, it means to call something intelligent. Melanie’s book, Artificial Intelligence: A Guide for Thinking Humans, is one of the top books on AI recommended by The New York Times. It’s a rational voice among all the AI hype in the media.

Melanie: And depending on whom you ask, artificial intelligence is either going to solve all humanity’s problems, or it’s going to kill us. When we interact with systems like Google Translate, or hear the buzz around self-driving cars, or wonder if ChatGPT actually understands human language, it can feel like AI is going to transform everything about the way we live. But before we get carried away making predictions about AI, it’s useful to take a step back. What does it mean to call anything intelligent, whether it’s a computer or an animal or a human child?

Abha: In this season, we’re going to hear from cognitive scientists, child development specialists, animal researchers, and AI experts to get a sense of what we humans are capable of and how AI models actually compare. And in the sixth episode, I’ll sit down with Melanie to talk about her research and her views on AI.

Melanie: To kick us off, we’re going to start with the broadest, most basic question: what really is intelligence, anyway? As many researchers know, the answer is more complicated than you might think.

Melanie: Part One: What is intelligence?

Alison: I’m Alison Gopnik. I’m a professor of psychology and affiliate professor of philosophy and a member of the Berkeley AI Research group. And I study how children manage to learn as much as they do, particularly in a sort of computational context. What kinds of computations are they performing in those little brains that let them be the best learners we know of in the universe?

Abha: Alison is also an external professor with the Santa Fe Institute, and she’s done extensive research on children and learning. When babies are born, they’re practically little blobs that can’t hold up their own heads. But as we all know, most babies become full-blown adults who can move, speak, and solve complex problems. From the time we enter this world, we’re trying to figure out what the heck is going on all around us, and that learning sets the foundation for human intelligence.

Alison: Yeah, so one of the things that is really, really important about the world is that some things make other things happen. So everything from thinking about the way the moon affects the tides to just the fact that I’m talking to you and that’s going to make you change your minds about things. Or the fact that I can pick up this cup and spill the water and everything will get wet. Those really basic cause and effect relationships are incredibly important.

And they’re important partly because they let us do things. So if I know that something is gonna cause a particular effect, what that means is if I wanna bring about that effect, I can actually go out in the world and do it. And it underpins everything from just our everyday ability to get around in the world, even for an infant, to the most incredible accomplishments of science. But at the same time, those causal relationships are kind of mysterious and always have been. How is it? After all, all we see is that one thing happens and another thing follows it. How do we figure out that causal structure?

Melanie: So how do we?

Alison: Yeah, good question. So that’s been a problem philosophers have thought about for centuries. And there’s basically two pieces. And anyone who’s done science will recognize these two pieces. We analyze statistics. So we look at what the dependencies are between one thing and another. And we do experiments. We go out, perhaps the most important way that we understand about causality is you do something and then you see what happens and then you do something again and you say, wait a minute, that happened again.

And part of what I’ve been doing recently, which has been really fun, is just look at babies, even like one year olds. And if you just sit and look at a one year old, mostly what they’re doing is doing experiments. I have a lovely video of my one-year-old grandson with a xylophone and a mallet.

Abha: Of course, we had to ask Alison to show us the video. Her grandson is sitting on the floor with the xylophone, while his grandfather plays an intricate song on the piano. Together, they make a strange duet.

Alison: And it’s not just that he makes the noise. He tries turning the mallet upside down. He tries with his hand a bit. That doesn’t make a noise. He tries with the stick end. That doesn’t make a noise. Then he tries it on one bar and it makes one noise. Another bar, it makes another noise. So when the babies are doing the experiments, we call it getting into everything. But I increasingly think that’s their greatest motivation.

Abha: So babies and children are doing these cause and effect experiments constantly, and that’s a major way that they learn. At the same time, they’re also figuring out how to move and use their bodies, developing a distinct intelligence in their motor systems so they can balance, walk, use their hands, turn their heads, and eventually, move in ways that don’t even require much thinking at all.

Melanie: One of the leading researchers on intelligence and physical movement is John Krakauer, a professor of neurology, neuroscience, physical medicine, and rehabilitation at the Johns Hopkins University School of Medicine. John’s also in the process of writing a book.

John Krakauer: I am. I’ve been writing it for much longer than I expected, but now I finally know the story I want to tell. I’ve been practicing it.

Melanie: Well, let me ask, I just want to mention that the subtitle is Thinking versus Intelligence in Animals, Machines and Humans. So I wanted to get your take on what is thinking and what is intelligence.

John: Oh my gosh, thanks Melanie for such an easy softball question.

Melanie: Well, you’re writing a book about it.

John: Well, yes, so… I think I was very inspired by two things. One was how much intelligent adaptive behavior your motor system has even when you’re not thinking about it. The example I always give is pressing an elevator button: before you lift your arm to press the button, you contract your gastrocnemius in anticipation, because your arm is sufficiently heavy that if you didn’t, you’d fall over as your center of gravity shifted. So there are countless examples of intelligent behaviors. In other words, they’re goal-directed and accomplish the goal below the level of overt deliberation or awareness.

And then there’s a whole field of what are called long-latency stretch reflexes. These operate below the timescale of voluntary movement, but are sufficiently flexible to deal with quite a lot of variation in the environment and still get the goal accomplished, and yet they’re involuntary.

Abha: There’s a lot that we can do without actually understanding what’s happening. Think about the muscles we use to swallow food, or balance on a bike, for example. Learning how to ride a bike takes a lot of effort, but once you’ve figured it out, it’s almost impossible to explain it to someone else.

John: And so it’s what Daniel Dennett, who recently passed away but was very influential for me, called competence with comprehension versus competence without comprehension. And, you know, I think he also was impressed by how much competence there is in the absence of comprehension. And yet along came this extra piece, the comprehension, which added to competence and greatly increased the repertoire of our competences.

Abha: Our bodies are competent in some ways, but when we use our minds to understand what’s going on, we can do even more. To go back to Alison’s example of her grandson playing with a xylophone, comprehension allows him, or anyone playing with a xylophone mallet, to learn that each end of it makes a different sound.

If you or I saw a xylophone for the first time, we would need to learn what a xylophone is, what a mallet is, how to hold it, and which end might make a noise if we knocked it against a musical bar. We’re aware of it. Over time we internalize these observations so that every time we see a xylophone mallet, we don’t need to think through what it is and what the mallet is supposed to do.

Melanie: And that brings us to another, crucial part of human intelligence: common sense. Common sense is knowing that you hold a mallet by the stick end and use the round part to make music. And if you see another instrument, like a marimba, you know that the mallet is going to work the same way. Common sense gives us basic assumptions that help us move through the world and know what to do in new situations. But it gets more complicated when you try to define exactly what common sense is and how it’s acquired.

John: Well, I mean, to me, common sense is an amalgam of the stuff you’re born with. So, you know, any animal will know that if it steps over an edge, it’s going to fall, right? Plus what you’ve learned through experience that allows you to do quick inference.

So in other words, you know, an animal, it starts raining, it knows it has to find shelter. Right? So in other words, presumably it learns that you don’t want to be wet, and so it makes the inference it’s going to get wet, and then it finds a shelter. It’s a common sense thing to do in a way.

And then there’s the thought version of common sense. Right? It’s common sense that if you’re approaching a narrow alleyway, your car’s not gonna fit in it. Or if you go to a slightly less narrow one, your door won’t open when you open the door. Countless interactions between your physical experience, your innate repertoire, and a little bit of thinking. And it’s that fascinating mixture of fact and inference and deliberation. And then we seem to be able to do it over a vast number of situations, right?

In other words, we just seem to have a lot of facts, a lot of innate understanding of the physical world, and then we seem to be able to think with those facts and those innate awarenesses. That, to me, is what common sense is. It’s this almost language-like flexibility of thinking with our facts and thinking with our innate sense of the physical world, and combinatorially doing it all the time, thousands of times a day. I know that’s a bit waffly. I’m sure Melanie can do a much better job at it than me, but that’s how I see it.

Melanie: No, I think that’s actually a great exposition of what it means. I totally agree. I think it is fast inference about new situations that combines knowledge and sort of reasoning, fast reasoning, and a lot of very basic knowledge that’s not really written down anywhere that we happen to know because we exist in the physical world and we interact with it.

Melanie: So, observing cause and effect, developing motor reflexes, and strengthening common sense are all happening and overlapping as children get older.

Abha: And we’re going to cover one more type of intelligence that seems to be unique to humans, and that’s the drive to understand the world.

John: It turns out, for reasons that physicists have puzzled over, that the universe is understandable, explainable, and manipulable. The side effect of the world being understandable is that you begin to understand sunsets and why the sky is blue and how black holes work and why water is a liquid and then a gas. It turns out that these are things worth understanding, because you can then manipulate and control the universe. And it’s obviously advantageous, because humans have taken over entirely.

I have a fancy microphone that I can have a Zoom call with you with. An understandable world is a manipulable world. As I always say, an arctic fox trotting very well across the arctic tundra is not going, “hmm, what’s ice made out of?” It doesn’t care. Now we, at some point between chimpanzees and us, started to care about how the world worked. And it obviously was useful, because we could do all sorts of things. Fire, shelter, blah blah blah.

Abha: And in addition to understanding the world, we can observe ourselves observing, a process known as metacognition. If we go back to the xylophone, metacognition is thinking, “I’m here, learning about this xylophone. I now have a new skill.”

And metacognition is what lets us explain what a xylophone is to other people, even if we don’t have an actual xylophone in front of us. Alison explains more.

Alison: So the things that I’ve been emphasizing are these kinds of external exploration and search capacities, like going out and doing experiments. But we know that people, including little kids, do what you might think of as sort of internal search. So they learn a lot, and now they just intrinsically, internally want to say, “what are some things, new conclusions I could draw, new ideas I could have based on what I already know?”

And that’s really different from just what are the statistical patterns in what I already know. And I think two capacities that are really important for that are metacognition and also one that Melanie’s looked at more than anyone else, which is analogy. So being able to say, okay, here’s all the things that I think, but how confident am I about that? Why do I think that? How could I use that learning to learn something new?

Or saying, here’s the things that I already know. Here’s an analogy that would be really different, right? So I know all about how water works. Let’s see, if I think about light, does it have waves the same way that water has waves? So actually learning by just thinking about what you already know.

John: I find myself constantly changing my position. On the one hand, there’s this human capacity to sort of look at yourself computing, a sort of metacognition, which is consciousness not just of the outside world and of your body, but consciousness of your processing of the outside world and your body. It’s almost as though you used consciousness to look inward at what you were doing. Humans have computations and feelings. They have a special type of feeling and computation which together is deliberative. And that’s what I think thinking is: it’s feeling your computations.

Melanie: What John is saying is that humans have conscious feelings — our sensations such as hunger or pain — and that our brains perform unconscious computations, like the muscle reflexes that happen when we press an elevator button. What he calls deliberative thought is when we have conscious feelings or awareness about our computations.

You might be solving a math problem and realize with dismay that you don’t know how to solve it. Or, you might get excited if you know exactly what trick will work. This is deliberative thought — having feelings about your internal computations. To John, the conscious and unconscious computations are both “intelligent,” but only the conscious computations count as “thinking”.

Abha: So Melanie, having listened to John and Alison, I’d like to go back to our original question with you. What do you think is intelligence?

Melanie: Well, let me recap some of what Alison and John said. Alison really emphasized the ability to learn about cause and effect.

What causes what in the world, and how we can predict what’s going to happen. And she pointed out that the way we learn this, adults and especially kids, is by doing little experiments, interacting with the world and seeing what happens, and learning about cause and effect that way. She also stressed our ability to generalize, to make analogies, to see how situations might be similar to each other in an abstract way. And this underlies what we would call our common sense, that is, our basic understanding of the world.

Abha: Yeah, that example of the xylophone and the mallet, that was very intriguing. As both John and Alison said, humans seem to have a unique drive to gain an understanding of the world via experiments like making mistakes, trying things out. And they both emphasize this important role of metacognition or reasoning about one’s own thinking. What do you think of that? You know, how important do you think metacognition is?

Melanie: It’s absolutely essential to human intelligence. It’s really what underlies, I think, our uniqueness. John, you know, made this distinction between intelligence and thinking. To him, you know, most of our, what he would call our intelligent behavior is unconscious. It doesn’t involve metacognition. He called it competence without comprehension. And he reserved the term thinking for conscious awareness of what he called one’s internal computations.

Abha: Even though John and Alison have given us some great insights about what makes us smart, I think both would admit that no one has come to a full, complete understanding of how human intelligence works, right?

Melanie: Yeah, we’re far from that. But in spite of that, big tech companies like OpenAI and DeepMind are spending huge amounts of money in an effort to make machines that, as they say, will match or exceed human intelligence. So how close are they to succeeding? Well, in part two, we’ll look at how systems like ChatGPT learn and whether or not they’re even intelligent at all.

Abha: Part two: How intelligent are today’s machines?

Abha: If you’ve been following the news around AI, you may have heard the acronym LLM, which stands for large language model. It’s the term that’s used to describe the technology behind systems like ChatGPT from OpenAI or Gemini from Google. LLMs are trained to find statistical correlations in language, using mountains of text and other data from the internet. In short, if you ask ChatGPT a question, it will give you an answer based on what it has calculated to be the most likely response, based on the vast amount of information it’s ingested.
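
As an aside, the idea of picking the statistically most likely next word can be sketched with a toy bigram model: count which word follows which in a corpus, then predict the most frequent successor. This is only next-token prediction in miniature, under the simplest possible assumptions; real LLMs use transformer networks with billions of parameters, not lookup tables.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then predict the most likely next word. Illustrative only -- real LLMs
# share the objective (next-token prediction), not this mechanism.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # tally each observed (word, next-word) pair

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" ("cat" follows "the" twice; "mat" and "fish" once each)
```

Scaled up from counts over a ten-word corpus to transformers trained on trillions of tokens, this same objective, predicting the next token, is what the systems Abha describes are trained on.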

Melanie: Humans learn by living in the world — we move around, we do little experiments, we build relationships, and we feel. LLMs don’t do any of this. But they do learn from language, which comes from humans and human experience, and they’re trained on a lot of it. So does this mean that LLMs could be considered to be intelligent? And how intelligent can they, or any form of AI, become?

Abha: Several tech companies have an explicit goal to achieve something called artificial general intelligence, or AGI. AGI has become a buzzword, and everyone defines it a bit differently. But, in short, AGI is a system that has human level intelligence. Now, this assumes that a computer, like a brain in a jar, can become just as smart, or even smarter, than a human with a feeling body. Melanie asked John what he thought about this.

Melanie: You know, I find it confusing when people like Demis Hassabis, who’s one of the co-founders of DeepMind, say things like this. He said in an interview that AGI is a system that should be able to do pretty much any cognitive task that humans can do, and that he expects there’s a 50% chance we’ll have AGI within a decade. Okay, so I emphasize that word cognitive task, because that term is confusing to me. But it seems so obvious to them.

John: Yes, I mean, I think it’s the belief that everything non-physical at the task level can be written out as a kind of program or algorithm. I just don’t know… and maybe it’s true when it comes to, you know, ideas, intuitions, creativity.

Melanie: I also asked John if he thought that maybe that separation, between cognition and everything else, was a fallacy.

John: Well, it seems to me, you know, it always makes me a bit nervous to argue with you of all people about this, but I would say, I think there’s a difference between saying, can we reach human levels of intelligence when it comes to common sense, the way humans do it, versus can we end up with the equivalent phenomenon, without having to do it the way humans do it. The problem for me with that is that we, like this conversation we’re having right now, are capable of open-ended, extrapolatable thought. We go beyond what we’re talking about.

I struggle with it but I’m not going to put myself in this precarious position of denying that a lot of problems in the world can be solved without comprehension. So maybe we’re kind of a dead end — comprehension is a great trick, but maybe it’s not needed. But if comprehension requires feeling, then I don’t quite see how we’re going to get AGI in its entirety. But I don’t want to sound dogmatic. I’m just practicing my… my unease about it. Do you know what I mean? I don’t know.

Abha: Alison is also wary of over-hyping our capacity to get to AGI.

Alison: And one of the great old folk tales is called Stone Soup.

Abha: Or you might have heard it called Nail Soup — there are a few variations. She uses this stone soup story as a metaphor for how much our so-called “AI technology” actually relies on humans and the language they create.

Alison: And the basic story of Stone Soup is that, there’s some visitors who come to a village and they’re hungry and the villagers won’t share their food with them. So the visitors say, that’s fine. We’re just going to make stone soup. And they get a big pot and they put water in it. And they say, we’re going to get three nice stones and put it in. And we’re going to make wonderful stone soup for everybody.

They start boiling it. And they say, this is really good soup. But it would be even better if we had a carrot or an onion that we could put in it. And of course, the villagers go and get a carrot and onion. And then they say, this is much better. But you know, when we made it for the king, we actually put in a chicken and that made it even better. And you can imagine what happens.

All the villagers contribute all their food. And then in the end, they say, this is amazingly good soup and it was just made with three stones. And I think there’s a nice analogy to what’s happened with generative AI. So the computer scientists come in and say, look, we’re going to make intelligence just with next token prediction and gradient descent and transformers.

And then they say, but you know, this intelligence would be much better if we just had some more data from people that we could add to it. And then all the villagers go out and add all of the data of everything that they’ve uploaded to the internet. And then the computer scientists say, no, this is doing a good job at being intelligent.

But it would be even better if we could have reinforcement learning from human feedback and get all you humans to tell it what you think is intelligent or not. And all the humans say, OK, we’ll do that. And then and then it would say, you know, this is really good. We’ve got a lot of intelligence here.

But it would be even better if the humans could do prompt engineering to decide exactly how they were going to ask the questions so that the systems could do intelligent answers. And then at the end of that, the computer scientists would say, see, we got intelligence just with our algorithms. We didn’t have to depend on anything else. I think that’s a pretty good metaphor for what’s happened in AI recently.

Melanie: The way AGI has been pursued is very different from the way humans learn. Large language models, in particular, are created with tons of data shoved into the system with a relatively short training period, especially when compared to the length of human childhood. The stone soup method uses brute force to shortcut our way to something akin to human intelligence.

Alison: I think it’s just a category mistake to say things like are LLM’s smart. It’s like asking, is the University of California Berkeley library smarter than I am? Well, it definitely has more information in it than I do, but it just feels like that’s not really the right question. Yeah, so one of the things about humans in particular is that we’ve always had this great capacity to learn from other humans.

And one of the interesting things about that is that we’ve had different kinds of technologies over history that have allowed us to do that. So obviously language itself, you could think of as a device that lets humans learn more from other people than other creatures can do. My view is that the LLMs are kind of the latest development in our ability to get information from other people.

But again, this is not trivializing or debunking it. Those changes in our cultural technology have been among the biggest and most important social changes in our history. So writing completely changed the way that we thought and the way that we functioned and the way that we acted in the world.

At the moment, as people have pointed out, the fact that I have in my pocket a device that will let me get all the information from everybody else in the world mostly just makes me irritated and miserable most of the time. We would have thought that that would have been like a great accomplishment. But people felt that same way about writing and print when they started too. The hope is that eventually we’ll adjust to that kind of technology.

Melanie: Not everyone shares Alison’s view on this. Some researchers think that large language models should be considered to be intelligent entities, and some even argue that they have a degree of consciousness. But thinking of large language models as a type of cultural technology, instead of sentient bots that might take over the world, helps us understand how completely different they are from people. And another important distinction between large language models and humans is that they don’t have an inherent drive to explore and understand the world.

Alison: They’re just sort of sitting there and letting the data waft over them rather than actually going out and acting and sensing and finding out something new.

Melanie: This is in contrast to the one-year-old saying —

Alison: Huh, the stick works on the xylophone. Will it work on the clock or the vase or whatever else it is that you’re trying to keep the baby away from? That’s a kind of internal basic drive to generalize, to think about, okay, it works in the way that I’ve been trained, but what will happen if I go outside of the environment in which I’ve been trained? We have caregivers who have a really distinctive kind of intelligence that we haven’t studied enough, I think, who are looking at us, letting us explore.

And caregivers are very well designed to, even if it feels frustrating when you’re doing it, we’re very good at kind of getting this balance between how independent should the next agent be? How much should we be constraining them? How much should we be passing on our values? How much should we let them figure out their own values in a new environment?

And I think if we ever do have something like an intelligent AI system, we’re going to have to do that. Our role, our relationship to them should be this caregiving role rather than thinking of them as being slaves on the one hand or masters on the other hand, which tends to be the way that we think about them. And as I say, it’s not just in computer science, in cognitive science, probably for fairly obvious reasons, we know almost nothing about the cognitive science of caregiving. So that’s actually what I’m, I just got a big grant, what I’m going to do for my remaining grandmotherly cognitive science years.

Abha: That sounds very fascinating. I’ve been curious to see what comes out of that work.

Alison: Well, let me give you just a very simple first pass, our first experiment. If you ask three- and four-year-olds, here's Johnny and he can go on the high slide or he can go on the slide that he already knows about. And what will he do if mom's there? And your intuitions might be, maybe the kids will say, well, you don't do the risky thing when mom's there because she'll be mad about it, right? And in fact, it's the opposite. The kids consistently say, no, if mom is there, that will actually let you explore, that will let you take risks, that will let you…

Melanie: She’s there to take you to the hospital.

Alison: Exactly, she’s there to actually protect you and make sure that you’re not doing the worst thing. But of course, for humans, it should be a cue to how important caregiving is for our intelligence. We have a much wider range of people investing in much more caregiving.

So not just mothers, but, my favorite post-menopausal grandmothers, but fathers, older siblings, what are called alloparents, just people around who are helping to take care of the kids. And it’s having that range of caregivers that actually seems to really help. And again, that should be a cue for how important this is in our ability to do all the other things we have, like be intelligent and have culture.

Melanie: If you just look at large language models, you might think we’re nowhere near anything like AGI. But there are other ways of training AI systems. Some researchers are trying to build AI models that do have an intrinsic drive to explore, rather than just consume human information.

Alison: So one of the things that’s happened is that quite understandably the success of these large models has meant that everybody’s focused on the large models. But in parallel, there’s lots of work that’s been going on in AI that is trying to get systems that look more like what we know that children are doing. And I think actually if you look at what’s gone on in robotics, we’re much closer to thinking about systems that look like they’re learning the way that children do.

And one of the really interesting developments in robotics has been the idea of building in intrinsic motivation into the systems. So to have systems that aren’t just trying to do whatever it is that you programmed it to do, like open up the door, but systems that are looking for novelty, that are curious, that are trying to maximize this value of empowerment, that are trying to find out all the range of things they could do that have consequences in the world.

And I think at the moment, the LLMs are the thing that everyone’s paying attention to, but I think that route is much more likely to be a route to really understanding a kind of intelligence that looks more like the intelligence that’s in those beautiful little fuzzy heads.

And I should say we’re trying to do that. So we’re collaborating with computer scientists at Berkeley who are exactly trying to see what would happen if we say, give an intrinsic reward for curiosity. What would happen if you actually had a system that was trying to learn in the way that the children are trying to learn?

Melanie: So are Alison and her team on their way to an AGI breakthrough? Despite all this, Alison is still skeptical.

Alison: I think it’s just again, a category mistake to say we’ll have something like artificial general intelligence, because we don’t have natural general intelligence.

Melanie: In Alison’s view, we don’t have natural general intelligence because human intelligence is not really general. Human intelligence evolved to fit our very particular human needs. So, Alison likewise doesn’t think it makes sense to talk about machines with “general intelligence”, or machines that are more intelligent than humans.

Alison: Instead, what we’ll have is a lot of systems that can do different things, that might be able to do amazing things, wonderful things, things that we can’t do. But that kind of intuitive theory that there’s this thing called intelligence that you could have more of or less of, I just don’t think it fits anything that we know from cognitive science.

It is striking how different the view of the people, not all the people, but some of the people who are also making billions of dollars out of doing AI are from, I mean, I think this is sincere, but it’s still true that their view is so different from the people who are actually studying biological intelligences.

Melanie: John suspects that there’s one thing that computers may never have: feelings.

John: It’s very interesting that I always used pain as the example. In other words, what would it mean for a computer to feel pain? And what would it mean for a computer to understand a joke? So I’m very interested in these two things. We have this physical, emotional response. We laugh, we feel good, right? So when you understand a joke, where should the credit go? Should it go to understanding it? Or should it go to the laughter and the feeling that it evokes?

And to my sort of chagrin or surprise or maybe not surprise, Daniel Dennett wrote a whole essay in one of his early books on why computers will never feel pain. He also wrote a whole book on humor. So in other words, it’s kind of wonderful in a way, that whether he would have ended up where I’ve ended up, but at least he understood the size of the mystery and the problem.

And I agree with him, if I understood his pain essay correctly, and it’s influential on what I’m going to write, I just don’t know what it means for a computer to feel pain, be thirsty, be hungry, be jealous, have a good laugh. To me, it’s a category error. Now, if thinking is the combination of feeling… and computing, then there’s never going to be deliberative thought in a computer.

Abha: While we were talking to John, he frequently referred to pain receptors as the example of how we humans feel with our bodies. But we wanted to know: what about the more abstract emotions, like joy, or jealousy, or grief? It’s one thing to stub your toe and feel pain radiate up from your foot. It’s another to feel pain during a romantic breakup, or to feel happy when seeing an old friend. We usually think of those as all in our heads, right?

John: You know, I’ll say something kind of personal. A close friend of mine called me today to tell me… that his younger brother had been shot and killed in Baltimore. Okay. I don’t want to be a downer. I’m saying it for a reason. And he was talking to me about the sheer overwhelming physicality of the grief that he was feeling. And, I was thinking, what can I say with words to do anything about that pain? And the answer is nothing. Other than just to try.

But seeing that kind of grief and all that it entails, even more than seeing the patients that I’ve been looking after for 25 years, is what leads to a little bit of testiness on my part when one tends to downplay this incredible mixture of meaning and loss and memory and pain. And to know that this is a human being who knows, forecasting into the future, that he’ll never see this person again. It’s not just now. Part of that pain is into the infinite future. Now, all I’m saying is we don’t know what that glorious and sad amalgam is, but I’m not going to just dismiss it away and explain it away as some sort of peripheral computation that we will solve within a couple of weeks, months or years.

Do you see? I find it just slightly enraging, actually. And I just feel that, as a doctor and as a friend, we need to know that we don’t know how to think about these things yet. Right? I just don’t know. And I am not convinced of anything yet. So I think that there is a link between physical pain and emotional pain, but I can tell you from the losses I felt, it’s physical as much as it is cognitive. So grief, I don’t know what it would mean for a computer to feel grief. I just don’t know. I think we should respect the mystery.

Abha: So Melanie, I noticed that John and Alison are both a bit skeptical about today’s approaches to AI. I mean, will it lead to anything like human intelligence? What do you think?

Melanie: Yeah, I think that today’s approaches have some limitations. Alison put a lot of emphasis on the need for an agent to be actively interacting in the world as opposed to passively just receiving language input. And for an agent to have its own intrinsic motivation in order to be intelligent. Alison interestingly sees large language models more like libraries or databases than like intelligent agents. And I really loved her stone soup metaphor where her point is that all the important ingredients of large language models come from humans.

Abha: Yeah, it’s such an interesting illustration because it sort of tells us everything that goes on behind the scene, you know, before we see the output that an LLM gives us. John seemed to think that full artificial general intelligence is impossible, even in principle. He said that comprehension requires feeling or the ability to feel one’s own internal computations. And he didn’t seem to see how computers could ever have such feelings.

Melanie: And I think most people in AI would disagree with John. Many people in AI don’t even think that any kind of embodied interaction with the world is necessary. They’d argue that we shouldn’t underestimate the power of language.

In our next episode, we’ll go deeper into the importance of this cultural technology, as Alison would put it. How does language help us learn and construct meaning? And what’s the relationship between language and thinking?

Steve: You can be really good at language without having the ability to do the kind of sequential, multi-step reasoning that seems to characterize human thinking.

Abha: That’s next time, on Complexity.

Complexity is the official podcast of the Santa Fe Institute. This episode was produced by Katherine Moncure. Our theme song is by Mitch Mignano, and additional music from Blue Dot Sessions.

I’m Abha, thanks for listening.

Learn more about the coaching process or
contact me to discuss your storytelling goals!

Subscribe to the newsletter for the latest updates!

Copyright Storytelling with Impact® – All rights reserved

Olivia Remes: How to cope with anxiety @ TEDxUHasselt

While mental health was once a topic rarely talked about in public, thankfully the stigma continues to fade away and issues are now discussed more often in forums such as TED/TEDx events. In her talk at TEDxUHasselt, Olivia Remes gives the audience a few tips on how to cope with anxiety. We’ve all felt anxious at some point in our life – most likely, many times – but in these cases the feeling goes away once the situation that caused our anxiety has passed. On the other hand, anxiety disorders are of a more serious nature.

An anxiety disorder is different from “normal” anxiety. “Abnormal” anxiety is defined by excessive and persistent worries that don’t go away, even when there’s nothing to be stressed or nervous about. With an anxiety disorder, people usually try to avoid triggering situations or things that worsen their symptoms. – Healthline Media

Olivia is primarily talking about those people affected by anxiety disorders, but the techniques that she presents have much broader application. Watch her talk, then come back to review her transcript, as well as the comments I’ve made. Notice how she begins with a story, then moves to an explanation of the topic, before shifting to the suggestions regarding how to deal with anxiety.

Transcript (my notes in red)

Olivia invites the audience into a pair of situations that may be familiar to a lot of people.

Imagine that you’re getting ready to go to a party. You feel excited, but also nervous, and you’ve got this feeling in your stomach almost like another heartbeat.

There’s something holding you back, holding you back from getting too happy. “No, you mustn’t get too happy. Better to be cautious, otherwise, something bad might happen.” You start wondering, “Who should I talk to when I get there? What if no one wants to talk to me? What if they’ll think I’m weird?”

When you arrive at the party, someone comes up to you and starts talking with you, and as this is happening, your mind starts racing, your heart begins pounding, you start sweating, and it feels almost like you’re dissociating from yourself, like it’s an out-of-body experience, and you’re just watching yourself talk.

“Keep it together,” you say to yourself, but you can’t, and it’s just getting worse. After a few minutes of conversation, the person you’ve been speaking to leaves, and you feel utterly defeated. This has been happening to you in social situations for a long time.

Or imagine that every time you go out, and you’re in crowded places, you feel this panic starting to arise. When you’re surrounded by lots of people, like on a bus, you start to feel hot, nauseous, uneasy, and to prevent this from happening, you start avoiding a lot of places which makes you feel lonely and isolated.

You or the person in both of these scenarios have anxiety disorders. And what I can tell you is that anxiety is very common, much more than people think. Right now, one in 14 people around the world have an anxiety disorder, and each year, it costs over 42 billion dollars to treat this mental health problem. To show you the impacts that anxiety has on someone’s life, I will just mention that anxiety can lead to depression, school dropout, suicide.

There are a few ways to quote statistics, and in this case, Olivia decided to say, “one in 14 people”. That calculates to 7.1%, and she could have decided to quote that statistic instead. Is one or the other easier for you to understand? Is one way more impactful than the other?

It makes it harder to focus, and to hold down a job, and it can lead to relationship breakdown. But a lot of people don’t know this. That’s why, a lot of times, people sweep anxiety under the rug, as just nerves that you need to get over, as a weakness. But anxiety is so much more than that. A reason why so many people don’t think it’s important is that they don’t know what it is. Is it your personality? Is it an illness? Is it a normal sensation? What is it?

That’s why it’s important to differentiate what is normal anxiety, from what is an anxiety disorder. Normal anxiety is an emotion that we all get when we’re in stressful situations. For example, let’s say, you’re out in the woods, and you come face-to-face with a bear. This will probably make you feel a little bit anxious, and you’ll probably want to start running like crazy. This anxious feeling that you get is good because it protects you, it saves you, and it makes you want to hightail it out of there, although maybe it’s not such a good idea to start running when you see a bear. I really don’t think you can outrun a bear.

Anxiety helps us meet our deadlines at work and deal with emergencies in life, but when this anxiety emotion is taken to the extreme, and arises in situations which don’t pose a real threat, then that’s when you might have an anxiety disorder.

For example, people with generalized anxiety disorder worry excessively and constantly about everything going on in their lives, and they find it very difficult to control this worry. They also have symptoms like restlessness, fear, they find it hard to fall asleep at night, and they can’t concentrate on tasks.

It’s often the case that, when you’re presenting a scientific topic to the public, your audience will only have a general, and sometimes minimal, level of knowledge about it. In this situation, there’s not only a need to describe anxiety disorder, but to differentiate it from our normal anxious reaction to a specific situation. If your story involves a technical or complex subject, not just those based in science, think about how you can explain the topic clearly to an audience in a short period of time.

In spite of whatever kind of anxiety you might be suffering from, there is something that you can do to lower it. It works, and it’s simpler than you may think. All too often, we’re given medication for mental disorders, but it doesn’t always work in the long run. Symptoms often come back, and you’re back to where you started.

So here’s something else to consider. The way you cope or handle things has a direct impact on how much anxiety you’re experiencing, and if you tweak the way you’re coping, then you can lower your anxiety. In our study at the University of Cambridge, we showed that women living in poor areas have a higher risk for anxiety than women living in richer areas. These results didn’t surprise us, but when we looked closer, we found that women living in poor areas, if they had a particular set of coping resources, they didn’t have anxiety, while women living in poor areas without these coping resources had anxiety.

When addressing topics involving health, it’s important to back up your recommendations with research – studies, experiments, clinical trials, etc. In this talk, I would have preferred that Olivia provide some details regarding the study she mentions. How many people were studied? Over what period of time? How was the study conducted? This could be done in a couple of sentences and would create a stronger foundation for her story, in my opinion.

Other studies showed that people who had faced extreme circumstances, who had faced adversity, been through wars and natural disasters, if they had coping resources, they remained healthy and free of mental disorders, while others, facing the same hardships but without coping skills went on a downward spiral and developed mental disorders.

Beyond the examples given in the opening of the talk, this represents an important addition. Wars and natural disasters happen in many countries, and the effects on those who live through them can be serious. Considering the current state of the world, she could have added climate change to the list, as it will affect everyone.

So, what are some of these coping resources, and how can we use them to lower our anxiety? And before I dive into what they are, I’d like to point out – and I think this is so interesting – you can develop these coping resources or coping skills on your own through the things that you do; you can take charge of your anxiety and lower it, which I think is so empowering.

This is where the story pivots from explaining the problem, to presenting the solution. By also mentioning that people can utilize the coping resources on their own, Olivia captures our attention, as we now know that something tangible is coming up. The essence of any impactful talk is how the audience will benefit from the message.

Today I’ll be talking about three coping resources, and the first one is feeling like you’re in control of your life. People who feel like they’re more in control of their life have better mental health. If you feel like you’re lacking in control in life, then research shows that you should engage in experiences that give you greater control. I’ll show you what I mean.

Do you sometimes find that you put off starting something because you just don’t feel ready enough? Do you find it hard to make decisions, like what to wear, what to eat, who to date, which job to take up? Do you tend to waste a lot of time deciding what you might do while nothing gets done?

A way to overcome indecision and this lack of control in life, is to do it badly. There’s a quote by writer and poet G.K. Chesterton that says, “Anything worth doing, is worth doing badly the first time.” The reason why this works so well is that it speeds up your decision-making and catapults you straight into action, otherwise, you can spend hours deciding how you should go about doing something, or what you should do.

This can be paralyzing and can make you afraid to even begin. All too often, we aim for perfection, but never end up doing anything because the standards that we set for ourselves are too high, they’re intimidating, which stresses us out, so we delay starting something, or we might even abandon the whole thing altogether. “Do it badly” frees you up to take action.

I mean, you know how it is. So often, we want to do something perfectly. We can’t start until it’s the perfect time, until we’ve got all the skills. But this can be daunting and stressful, so why not just jump into it, just do it however, without worrying if it’s good or bad? This will make it that much easier to start something, and as you’re doing it badly to finish it, and when you look back, you’ll realize, more often than not, that actually, it’s not that bad.

A close friend of mine who has anxiety started using this motto, and this is what she said, “When I started using this motto, my life transformed. I found I could complete tasks in much shorter time periods than before. Do it badly gave me wings to take risks, to try something differently, and to have way more fun during the whole process. It took the anxiety out of everything and replaced it with excitement.” So do it badly, and you can improve as you go along. I’d like to ask you to think about this. If you start using this motto today, how would your life change?

Olivia explains her first coping technique – do it badly – in a simple, straightforward fashion, and also tells a story about someone who actually tried it. I would have framed this example by stating that there are times when the technique is not appropriate – when doing something badly can be dangerous, to yourself or others. There are times when we should wait until our skill level is adequate. If your story contains recommendations, consider whether a caveat needs to be included.

The second coping strategy is to forgive yourself, and this is very powerful if you use it. People with anxiety think a lot about what they’re doing wrong, their worries, and how bad they’re feeling. Imagine if you had a friend who constantly pointed out everything that you’re doing wrong, and everything that was wrong with your life. You would probably want to get rid of this person right away, wouldn’t you? Well, people with anxiety do this to themselves all day long. They’re not kind to themselves.

So maybe it’s time to start being kinder with ourselves, time to start supporting ourselves. And a way to do this, is to forgive yourself for any mistakes you think you might have made just a few moments ago, to mistakes made in the past. If you had a panic attack and are embarrassed about it, forgive yourself. If you wanted to talk to someone, but couldn’t muster up the courage to do so, don’t worry about it, let it go. Forgive yourself for anything and everything, and this will give you greater compassion towards yourself. You can’t begin to heal until you do this.

And last, but not least, having a purpose and meaning in life is a very important coping mechanism. Whatever we do in life, whatever work we produce, however much money we make, we cannot be fully happy until we know that someone else needs us. That someone else depends on our accomplishments, or on the love that we have to share. It’s not that we need other people’s good words to keep going in life, but if we don’t do something with someone else in mind, then we’re at much higher risk for poor mental health.

The famous neurologist and psychiatrist Dr. Viktor Frankl said, “For people who think there’s nothing to live for, and nothing more to expect from life, the question is getting these people to realize that life is still expecting something from them.”

Doing something with someone else in mind can carry you through the toughest times. You’ll know the why for your existence and will be able to bear almost any how. Almost any how. So the question is, do you do at least one thing with someone else in mind? This could be volunteering, or it could be sharing this knowledge that you gained today with other people, especially those who need it most, and these are often the people who don’t have money for therapy, and they’re usually the ones with the highest rates of anxiety disorders. Give it to them, share with others, because it can really improve your mental health.

Olivia’s second and third coping resources – regarding self-forgiveness and having purpose – are topics that could be the basis of their own talk, but once again, she presents them in an easy and accessible fashion. The audience now has three techniques that they can practice on their own. Should she have also mentioned that anyone experiencing more serious issues should seek out professional help? Are the ideas you present applicable in any situation, or are there limits?

So I would like to conclude with this. Another way you can do something with someone else in mind is finishing work that might benefit future generations. Even if these people will never realize what you’ve done for them, it doesn’t matter, because you will know, and this will make you realize the uniqueness and importance of your life.

On the one hand, I appreciate the message that Olivia ends with – realizing the importance of our life by serving others – that’s very powerful, but it’s basically an extension of her third technique. It’s not a summation of the story’s central theme of coping with anxiety. For me, it’s missing that wrap-up.

Thank you.

[Note: all comments inserted into this transcript are my opinions, not those of the speaker, the TED organization, nor anyone else on the planet. In my view, each story is unique, as is every interpretation of that story. The sole purpose of these analytical posts is to inspire a storyteller to become a storylistener, and in doing so, make their stories more impactful.]

Learn more about the coaching process or
contact me to discuss your storytelling goals!

Subscribe to our newsletter for the latest updates!

Copyright Storytelling with Impact – All rights reserved

Heather Barnett: What humans can learn from semi-intelligent slime @ TEDSalon Berlin

I had the pleasure of attending a special TED event in 2014. TEDSalon Berlin was just a one day affair, yet it featured a number of compelling talks that served as examples of impactful stories on global issues. This post is an analysis of a talk given by Heather Barnett on a most unusual character – a slime mold.

Watch Heather Barnett’s TED Talk. From what seems to be an unusual subject we come to see our human experience differently. It’s not easy to take people on a journey from something unfamiliar to something universal, but Heather does so masterfully.

Transcript

(my notes in red)

I’d like to introduce you to an organism: a slime mold, Physarum polycephalum. It’s a mold with an identity crisis, because it’s not a mold, so let’s get that straight to start with. It is one of 700 known slime molds belonging to the kingdom of the amoeba. It is a single-celled organism, a cell, that joins together with other cells to form a mass super-cell to maximize its resources. So within a slime mold you might find thousands or millions of nuclei, all sharing a cell wall, all operating as one entity. In its natural habitat, you might find the slime mold foraging in woodlands, eating rotting vegetation, but you might equally find it in research laboratories, classrooms, and even artists’ studios.

Great opening lines capture the attention of an audience, and one of the most powerful ways to do this is by way of curiosity, which is what occurs when your topic is something that the listener or reader has never heard of. And while technical jargon, left unexplained, can be an impediment to curiosity, Heather provides us with a vivid description of what ‘Physarum polycephalum’ is all about.

From a physicality standpoint, she holds up pinched fingers when mentioning ‘single-celled organism’, then spreads her arms shoulder width when stating ‘joins together with other cells’ and spreads her arms further when using the term ‘mass super-cell’.

These are subtle gestures, yet they reinforce the visual of how this organism operates. Watch her movements and gestures throughout the telling of this story. There’s much to learn here about stage presence that is both natural and impactful.

I first came across the slime mold about five years ago. A microbiologist friend of mine gave me a petri dish with a little yellow blob in it and told me to go home and play with it. The only instructions I was given, that it likes it dark and damp and its favorite food is porridge oats. I’m an artist who’s worked for many years with biology, with scientific processes, so living material is not uncommon for me.

I’ve worked with plants, bacteria, cuttlefish, fruit flies. So I was keen to get my new collaborator home to see what it could do. So I took it home and I watched. I fed it a varied diet. I observed as it networked. It formed a connection between food sources. I watched it leave a trail behind it, indicating where it had been. And I noticed that when it was fed up with one petri dish, it would escape and find a better home.

While we might have thought that Heather was a scientist – after all, who other than a scientist would talk about slime mold – we learn that she is, in fact, an artist, which tells our brain to shift gears and be ready for a different perspective on the topic.

Audiences want to know who you are, and why you’re so interested in the topic of your story. For experience-driven stories, those answers tend to be more obvious, but for idea-driven stories, you need to weave in those details.

I captured my observations through time-lapse photography. Slime mold grows at about one centimeter an hour, so it’s not really ideal for live viewing unless there’s some form of really extreme meditation, but through the time lapse, I could observe some really interesting behaviors. For instance, having fed on a nice pile of oats, the slime mold goes off to explore new territories in different directions simultaneously. When it meets itself, it knows it’s already there, it recognizes it’s there, and instead retreats back and grows in other directions. I was quite impressed by this feat, at how what was essentially just a bag of cellular slime could somehow map its territory, know itself, and move with seeming intention.

Imagine hearing this story without the benefit of Heather’s time-lapse photography. The story can be told, but the moving images make her description much more dramatic. Her use of images in the balance of her talk serves to increase impact. They say what can’t be easily described in full. Imagine how your words and images will play out in someone’s mind.

I found countless scientific studies, research papers, journal articles, all citing incredible work with this one organism, and I’m going to share a few of those with you.

For example, a team in Hokkaido University in Japan filled a maze with slime mold. It joined together and formed a mass cell. They introduced food at two points, oats of course, and it formed a connection between the food. It retracted from empty areas and dead ends. There are four possible routes through this maze, yet time and time again, the slime mold established the shortest and the most efficient route. Quite clever. The conclusion from their experiment was that the slime mold had a primitive form of intelligence.

Another study exposed cold air at regular intervals to the slime mold. It didn’t like it. It doesn’t like it cold. It doesn’t like it dry. They did this at repeat intervals, and each time, the slime mold slowed down its growth in response. However, at the next interval, the researchers didn’t put the cold air on, yet the slime mold slowed down in anticipation of it happening. It somehow knew that it was about the time for the cold air that it didn’t like. The conclusion from their experiment was that the slime mold was able to learn.

A third experiment: the slime mold was invited to explore a territory covered in oats. It fans out in a branching pattern. As it goes, each food node it finds, it forms a network, a connection to, and keeps foraging. After 26 hours, it established quite a firm network between the different oats. Now there’s nothing remarkable in this until you learn that the center oat that it started from represents the city of Tokyo, and the surrounding oats are suburban railway stations.

The slime mold had replicated the Tokyo transport network – a complex system developed over time by community dwellings, civil engineering, urban planning. What had taken us well over 100 years took the slime mold just over a day. The conclusion from their experiment was that the slime mold can form efficient networks and solve the traveling salesman problem.

It is a biological computer. As such, it has been mathematically modeled, algorithmically analyzed. It’s been sonified, replicated, simulated. World over, teams of researchers are decoding its biological principles to understand its computational rules and applying that learning to the fields of electronics, programming and robotics.

The best way to make a scientific point, especially when you’re not a scientist, is to reference published work from scientists who are subject matter experts on your topic. Not citing bona fide evidence, and simply making claims as though they are facts, will often create doubt in the minds of the audience. You’re not an expert in the field, so why should they believe you? In this case, however, Heather cites three scientific studies that illustrate a central theme of her story – intelligence.

So the question is, how does this thing work? It doesn’t have a central nervous system. It doesn’t have a brain, yet it can perform behaviors that we associate with brain function. It can learn, it can remember, it can solve problems, it can make decisions. So where does that intelligence lie? So this is a microscopy, a video I shot, and it’s about 100 times magnification, sped up about 20 times, and inside the slime mold, there is a rhythmic pulsing flow, a vein-like structure carrying cellular material, nutrients and chemical information through the cell, streaming first in one direction and then back in another. And it is this continuous, synchronous oscillation within the cell that allows it to form quite a complex understanding of its environment, but without any large-scale control center. This is where its intelligence lies.

A classic shift in idea-driven narratives is moving from the ‘what’ to the ‘how’ – ‘what happens’ to ‘how it happens’. Other shifts may involve exploring the why, when and where aspects. This process of exploration is about moving the audience to ever deeper levels of understanding. Taking someone on a journey is often related to space or time, but it also applies to knowledge. Think about how you can unfold a complex topic, doing so in such a way that the listener can follow along. Each layer is a foundation for the next.

So it’s not just academic researchers in universities that are interested in this organism. A few years ago, I set up SliMoCo, the Slime Mould Collective. It’s an online, open, democratic network for slime mold researchers and enthusiasts to share knowledge and experimentation across disciplinary divides and across academic divides. The Slime Mould Collective membership is self-selecting. People have found the collective as the slime mold finds the oats. And it comprises of scientists and computer scientists and researchers but also artists like me, architects, designers, writers, activists, you name it. It’s a very interesting, eclectic membership.

Just a few examples: an artist who paints with fluorescent Physarum; a collaborative team who are combining biological and electronic design with 3D printing technologies in a workshop; another artist who is using the slime mold as a way of engaging a community to map their area. Here, the slime mold is being used directly as a biological tool, but metaphorically as a symbol for ways of talking about social cohesion, communication and cooperation.

From talking about the slime mold, the story comes back to Heather, and a collective that she created in order to further the understanding of this subject. The narrative then expands to include other people who are part of the collective and what they’ve done. Telling stories of other people is a Story Block which broadens the narrative beyond the speaker’s experience.

Other public engagement activities; I run lots of slime mold workshops, a creative way of engaging with the organism. So people are invited to come and learn about what amazing things it can do, and they design their own petri dish experiment, an environment for the slime mold to navigate so they can test its properties. Everybody takes home a new pet and is invited to post their results on the Slime Mould Collective. And the collective has enabled me to form collaborations with a whole array of interesting people. I’ve been working with filmmakers on a feature-length slime mold documentary, and I stress feature-length, which is in the final stages of edit and will be hitting your cinema screens very soon.

It’s also enabled me to conduct what I think is the world’s first human slime mold experiment. This is part of an exhibition in Rotterdam last year. We invited people to become slime mold for half an hour. So we essentially tied people together so they were a giant cell, and invited them to follow slime mold rules. You have to communicate through oscillations, no speaking. You have to operate as one entity, one mass cell, no egos, and the motivation for moving and then exploring the environment is in search of food. So a chaotic shuffle ensued as this bunch of strangers tied together with yellow ropes wearing “Being Slime Mold” t-shirts wandered through the museum park.

When they met trees, they had to reshape their connections and reform as a mass cell through not speaking. This is a ludicrous experiment in many, many ways. This isn’t hypothesis-driven. We’re not trying to prove, demonstrate anything. But what it did provide us was a way of engaging a broad section of the public with ideas of intelligence, agency, autonomy, and provide a playful platform for discussions about the things that ensued.

One of the most exciting things about this experiment was the conversation that happened afterwards. An entirely spontaneous symposium happened in the park. People talked about the human psychology, of how difficult it was to let go of their individual personalities and egos. Other people talked about bacterial communication. Each person brought in their own individual interpretation, and our conclusion from this experiment was that the people of Rotterdam were highly cooperative, especially when given beer. We didn’t just give them oats. We gave them beer as well.

How your idea and passion integrates into society can be an important part of your story. Outside of the laboratory, and beyond art or science, Heather engages people to learn in a very tangible way. They were involved, had to make decisions, but also had fun doing it. Is there a similar set of experiences that you can include in your story to demonstrate how your idea can affect the way people think and act?

But they weren’t as efficient as the slime mold, and the slime mold, for me, is a fascinating subject matter. It’s biologically fascinating, it’s computationally interesting, but it’s also a symbol, a way of engaging with ideas of community, collective behavior, cooperation. A lot of my work draws on the scientific research, so this pays homage to the maze experiment but in a different way. And the slime mold is also my working material. It’s a coproducer of photographs, prints, animations, participatory events.

Whilst the slime mold doesn’t choose to work with me, exactly, it is a collaboration of sorts. I can predict certain behaviors by understanding how it operates, but I can’t control it. The slime mold has the final say in the creative process. And after all, it has its own internal aesthetics. These branching patterns that we see across all forms, all scales of nature, from river deltas to lightning strikes, from our own blood vessels to neural networks. There’s clearly significant rules at play in this simple yet complex organism, and no matter what our disciplinary perspective or our mode of inquiry, there’s a great deal that we can learn from observing and engaging with this beautiful, brainless blob.

I give you Physarum polycephalum.

It’s a powerful story that can begin with something we feel is insignificant – slime mold – and take us to a place where we are thinking about how humans interact with each other. After seeing this talk I began to view society differently: the chaos that occurs when we act too much as individuals, and the success that we can achieve when we work together.

There are no direct calls to action. Instead, this is a thought-provoking narrative that offers a new perspective for the audience to do with as they wish.
