Nature of Intelligence – Episode One – What is Intelligence?

I tend to think of storytelling as sitting at the intersection of four elements:

  • Consciousness — awareness of self, the environment, and our thoughts
  • Intelligence — ability to learn, understand, reason, and solve problems
  • Imagination — ability to create mental images, ideas, or concepts beyond reality
  • Creativity — ability to generate original ideas, solutions, and artistic expressions

They’re different terms, of course, yet you can see how they interact with each other. It’s also apparent that they’re involved in the process of creating stories. They’re so fundamental, in fact, that they go a long way towards describing what makes us human. But the funny thing is, science doesn’t know how to accurately define any of these concepts.

While thousands of hours have been spent seeking answers, and scientists can talk for days on end about their findings, it is still a mystery. Take Shakespeare, for example. How did he utilize these aspects of humanity to create something as magical as Hamlet? And if we can’t properly describe one of these elements, how do we explain how they work together? And extending beyond us mortals, will AI ever be able to replicate this magic?

So when I ran across the third season of the Santa Fe Institute’s Complexity podcast, which is devoted to the exploration of intelligence, I had to listen in. If you’re interested in how we create stories in our heads, I recommend you do the same: the season looks at the concept of intelligence through a human lens, as well as through the lens of artificial intelligence.

17th Century Playwright in England
There’s so much information in this first episode, but I wanted to share four quotes that intrigued me. First off is this notion of “common sense”. It seems simple, but again, it’s elusive to capture in words. How would you describe it?

Common sense gives us basic assumptions that help us move through the world and know what to do in new situations. But it gets more complicated when you try to define exactly what common sense is and how it’s acquired. ~ Melanie Mitchell

This notion of an equivalent phenomenon captures much of the human/AI debate: there’s a sense that a machine will never be human, but maybe it can get close enough.

I think there’s a difference between saying, can we reach human levels of intelligence when it comes to common sense, the way humans do it, versus can we end up with the equivalent phenomenon, without having to do it the way humans do it. ~ John Krakauer

This goes back to the reality that we don’t know what makes humans human, so how are we to compare a computer algorithm to what it means to be us?

I think it’s just again, a category mistake to say we’ll have something like artificial general intelligence, because we don’t have natural general intelligence. ~ Alison Gopnik

But we’re more than thinking animals. We have emotions. We fall in love, feel pain, express joy and sorrow. Or, in this case, grief. Computers are learning how to simulate emotions such as grief, but is that even possible?

I don’t know what it would mean for a computer to feel grief. I just don’t know. I think we should respect the mystery. ~ John Krakauer

So here goes, take a listen to Episode 1 and see what you think. The transcript is below if you feel so inclined (as I did) to follow along. It’s some heady stuff.

Transcript

Alison Gopnik: It’s like asking, is the University of California Berkeley library smarter than I am? Well, it definitely has more information in it than I do, but it just feels like that’s not really the right question.

Abha Eli Phoboo: From the Santa Fe Institute, this is Complexity.

Melanie Mitchell: I’m Melanie Mitchell.

Abha: And I’m Abha Eli Phoboo.

Abha: Today’s episode kicks off a new season for the Complexity podcast, and with a new season comes a new theme. This fall, we’re exploring the nature and complexity of intelligence in six episodes — what it means, who has it, who doesn’t, and if machines that can beat us at our own games are as powerful as we think they are. The voices you’ll hear were recorded remotely across different locations, including countries, cities and work spaces. But first, I’d like you to meet our new co-host.

Melanie: My name is Melanie Mitchell. I’m a professor here at the Santa Fe Institute. I work on artificial intelligence and cognitive science. I’ve been interested in the nature of intelligence for decades. I want to understand how humans think and how we can get machines to be more intelligent, and what it all means.

Abha: Melanie, it’s such a pleasure to have you here. I truly can’t think of a better person to guide us through what, exactly, it means to call something intelligent. Melanie’s book, Artificial Intelligence: A Guide for Thinking Humans, is one of the top books on AI recommended by The New York Times. It’s a rational voice among all the AI hype in the media.

Melanie: And depending on whom you ask, artificial intelligence is either going to solve all humanity’s problems, or it’s going to kill us. When we interact with systems like Google Translate, or hear the buzz around self-driving cars, or wonder if ChatGPT actually understands human language, it can feel like AI is going to transform everything about the way we live. But before we get carried away making predictions about AI, it’s useful to take a step back. What does it mean to call anything intelligent, whether it’s a computer or an animal or a human child?

Abha: In this season, we’re going to hear from cognitive scientists, child development specialists, animal researchers, and AI experts to get a sense of what we humans are capable of and how AI models actually compare. And in the sixth episode, I’ll sit down with Melanie to talk about her research and her views on AI.

Melanie: To kick us off, we’re going to start with the broadest, most basic question: what really is intelligence, anyway? As many researchers know, the answer is more complicated than you might think.

Melanie: Part One: What is intelligence?

Alison: I’m Alison Gopnik. I’m a professor of psychology and affiliate professor of philosophy and a member of the Berkeley AI Research group. And I study how children manage to learn as much as they do, particularly in a sort of computational context. What kinds of computations are they performing in those little brains that let them be the best learners we know of in the universe?

Abha: Alison is also an external professor with the Santa Fe Institute, and she’s done extensive research on children and learning. When babies are born, they’re practically little blobs that can’t hold up their own heads. But as we all know, most babies become full-blown adults who can move, speak, and solve complex problems. From the time we enter this world, we’re trying to figure out what the heck is going on all around us, and that learning sets the foundation for human intelligence.

Alison: Yeah, so one of the things that is really, really important about the world is that some things make other things happen. So everything from thinking about the way the moon affects the tides to just the fact that I’m talking to you and that’s going to make you change your minds about things. Or the fact that I can pick up this cup and spill the water and everything will get wet. Those really basic cause and effect relationships are incredibly important.

And they’re important partly because they let us do things. So if I know that something is gonna cause a particular effect, what that means is if I wanna bring about that effect, I can actually go out in the world and do it. And it underpins everything from just our everyday ability to get around in the world, even for an infant, to the most incredible accomplishments of science. But at the same time, those causal relationships are kind of mysterious and always have been. How is it? After all, all we see is that one thing happens and another thing follows it. How do we figure out that causal structure?

Melanie: So how do we?

Alison: Yeah, good question. So that’s been a problem philosophers have thought about for centuries. And there’s basically two pieces. And anyone who’s done science will recognize these two pieces. We analyze statistics. So we look at what the dependencies are between one thing and another. And we do experiments. We go out, perhaps the most important way that we understand about causality is you do something and then you see what happens and then you do something again and you say, wait a minute, that happened again.

And part of what I’ve been doing recently, which has been really fun, is just look at babies, even like one year olds. And if you just sit and look at a one year old, mostly what they’re doing is doing experiments. I have a lovely video of my one-year-old grandson with a xylophone and a mallet.

Abha: Of course, we had to ask Alison to show us the video. Her grandson is sitting on the floor with the xylophone, while his grandfather plays an intricate song on the piano. Together, they make a strange duet.

Alison: And it’s not just that he makes the noise. He tries turning the mallet upside down. He tries with his hand a bit. That doesn’t make a noise. He tries with a stick end. That doesn’t make a noise. Then he tries it on one bar and it makes one noise. Another bar, it makes another noise. So when the babies are doing the experiments, we call it getting into everything. But I increasingly think that’s their greatest motivation.

Abha: So babies and children are doing these cause and effect experiments constantly, and that’s a major way that they learn. At the same time, they’re also figuring out how to move and use their bodies, developing a distinct intelligence in their motor systems so they can balance, walk, use their hands, turn their heads, and eventually, move in ways that don’t even require much thinking at all.

Melanie: One of the leading researchers on intelligence and physical movement is John Krakauer, a professor of neurology, neuroscience, physical medicine, and rehabilitation at the Johns Hopkins University School of Medicine. John’s also in the process of writing a book.

John Krakauer: I am. I’ve been writing it for much longer than I expected, but now I finally know the story I want to tell. I’ve been practicing it.

Melanie: Well, let me ask, I just want to mention that the subtitle is Thinking versus Intelligence in Animals, Machines and Humans. So I wanted to get your take on what is thinking and what is intelligence.

John: Oh my gosh, thanks Melanie for such an easy softball question.

Melanie: Well, you’re writing a book about it.

John: Well, yes, so… I think I was very inspired by two things. One was how much intelligent adaptive behavior your motor system has even when you’re not thinking about it. The example I always give is when you press an elevator button before you lift your arm to press the button, you contract your gastrocnemius in anticipation that your arm is sufficiently heavy, that if you didn’t do that, you’d fall over because your center of gravity has shifted. So there are countless examples of intelligent behaviors. In other words, they’re goal-directed and accomplish the goal below the level of overt deliberation or awareness.

And then there’s a whole field, what are called long latency stretch reflexes, these below the time of voluntary movement, but sufficiently flexible to be able to deal with quite a lot of variation in the environment and still get the goal accomplished, but it’s still involuntary.

Abha: There’s a lot that we can do without actually understanding what’s happening. Think about the muscles we use to swallow food, or balance on a bike, for example. Learning how to ride a bike takes a lot of effort, but once you’ve figured it out, it’s almost impossible to explain it to someone else.

John: And so it’s what, Daniel Dennett, you know, who recently passed away, but was very influential for me with what he called, competence with comprehension versus competence without comprehension. And, you know, I think he also was impressed by how much competence there is in the absence of comprehension. And yet along came this extra piece, the comprehension, which added to competence and greatly increased the repertoire of our competences.

Abha: Our bodies are competent in some ways, but when we use our minds to understand what’s going on, we can do even more. To go back to Alison’s example of her grandson playing with a xylophone, comprehension allows him, or anyone, playing with a xylophone mallet to learn that each side of it makes a different sound.

If you or I saw a xylophone for the first time, we would need to learn what a xylophone is, what a mallet is, how to hold it, and which end might make a noise if we knocked it against a musical bar. We’re aware of it. Over time we internalize these observations so that every time we see a xylophone mallet, we don’t need to think through what it is and what the mallet is supposed to do.

Melanie: And that brings us to another, crucial part of human intelligence: common sense. Common sense is knowing that you hold a mallet by the stick end and use the round part to make music. And if you see another instrument, like a marimba, you know that the mallet is going to work the same way. Common sense gives us basic assumptions that help us move through the world and know what to do in new situations. But it gets more complicated when you try to define exactly what common sense is and how it’s acquired.

John: Well, I mean, to me, common sense is the amalgam of stuff that you’re born with. So you, you know, any animal will know that if it steps over the edge, it’s going to fall. Right. Plus what you’ve learned through experience that allows you to do quick inference.

So in other words, you know, an animal, it starts raining, it knows it has to find shelter. Right? So in other words, presumably it learns that you don’t want to be wet, and so it makes the inference it’s going to get wet, and then it finds a shelter. It’s a common sense thing to do in a way.

And then there’s the thought version of common sense. Right? It’s common sense that if you’re approaching a narrow alleyway, your car’s not gonna fit in it. Or if you go to a slightly less narrow one, your door won’t open when you open the door. Countless interactions between your physical experience, your innate repertoire, and a little bit of thinking. And it’s that fascinating mixture of fact and inference and deliberation. And then we seem to be able to do it over a vast number of situations, right?

In other words, we just seem to have a lot of facts, a lot of innate understanding of the physical world, and then we seem to be able to think with those facts. And those innate awarenesses. That, to me, is what common sense is. It’s this almost language-like flexibility of thinking with our facts and thinking with our innate sense of the physical world and combinatorially doing it all the time, thousands of times a day. I know that’s a bit waffly. I’m sure Melanie can do a much better job than me at that, but that’s how I see it.

Melanie: No, I think that’s actually a great exposition of what it means. I totally agree. I think it is fast inference about new situations that combines knowledge and sort of reasoning, fast reasoning, and a lot of very basic knowledge that’s not really written down anywhere that we happen to know because we exist in the physical world and we interact with it.

Melanie: So, observing cause and effect, developing motor reflexes, and strengthening common sense are all happening and overlapping as children get older.

Abha: And we’re going to cover one more type of intelligence that seems to be unique to humans, and that’s the drive to understand the world.

John: It turns out, for reasons that physicists have puzzled over, that the universe is understandable, explainable, and manipulable. The side effect of the world being understandable is that you begin to understand sunsets and why the sky is blue and how black holes work and why water is a liquid and then a gas. It turns out that these are things worth understanding because you can then manipulate and control the universe. And it’s obviously advantageous because humans have taken over entirely.

I have a fancy microphone that I can have a Zoom call with you with. An understandable world is a manipulable world. As I always say, an arctic fox trotting very well across the arctic tundra is not going, “hmm, what’s ice made out of?” It doesn’t care. Now we, for some point between chimpanzees and us, started to care about how the world worked. And it obviously was useful because we could do all sorts of things. Fire, shelter, blah blah blah.

Abha: And in addition to understanding the world, we can observe ourselves observing, a process known as metacognition. If we go back to the xylophone, metacognition is thinking, “I’m here, learning about this xylophone. I now have a new skill.”

And metacognition is what lets us explain what a xylophone is to other people, even if we don’t have an actual xylophone in front of us. Alison explains more.

Alison: So the things that I’ve been emphasizing are these kinds of external exploration and search capacities, like going out and doing experiments. But we know that people, including little kids, do what you might think of as sort of internal search. So they learn a lot, and now they just intrinsically, internally want to say, “what are some things, new conclusions I could draw, new ideas I could have based on what I already know?”

And that’s really different from just what are the statistical patterns in what I already know. And I think two capacities that are really important for that are metacognition and also one that Melanie’s looked at more than anyone else, which is analogy. So being able to say, okay, here’s all the things that I think, but how confident am I about that? Why do I think that? How could I use that learning to learn something new?

Or saying, here’s the things that I already know. Here’s an analogy that would be really different, right? So I know all about how water works. Let’s see, if I think about light, does it have waves the same way that water has waves? So actually learning by just thinking about what you already know.

John: I find myself constantly changing my position. On the one hand, there’s this human capacity to sort of look at yourself computing, a sort of metacognition, which is consciousness not just of the outside world and of your body; it’s consciousness of your processing of the outside world and your body. It’s almost as though you used consciousness to look inward at what you were doing. Humans have computations and feelings. They have a special type of feeling and computation which together is deliberative. And that’s what I think thinking is, it’s feeling your computations.

Melanie: What John is saying is that humans have conscious feelings — our sensations such as hunger or pain — and that our brains perform unconscious computations, like the muscle reflexes that happen when we press an elevator button. What he calls deliberative thought is when we have conscious feelings or awareness about our computations.

You might be solving a math problem and realize with dismay that you don’t know how to solve it. Or, you might get excited if you know exactly what trick will work. This is deliberative thought — having feelings about your internal computations. To John, the conscious and unconscious computations are both “intelligent,” but only the conscious computations count as “thinking”.

Abha: So Melanie, having listened to John and Alison, I’d like to go back to our original question with you. What do you think is intelligence?

Melanie: Well, let me recap some of what Alison and John said. Alison really emphasized the ability to learn about cause and effect.

What causes what in the world and how we can predict what’s going to happen. And she pointed out that the way we learn this, adults and especially kids, is by doing little experiments, interacting with the world and seeing what happens, and learning about cause and effect that way. She also stressed our ability to generalize, to make analogies, to see how situations might be similar to each other in an abstract way. And this underlies what we would call our common sense, that is, our basic understanding of the world.

Abha: Yeah, that example of the xylophone and the mallet, that was very intriguing. As both John and Alison said, humans seem to have a unique drive to gain an understanding of the world via experiments like making mistakes, trying things out. And they both emphasize this important role of metacognition or reasoning about one’s own thinking. What do you think of that? You know, how important do you think metacognition is?

Melanie: It’s absolutely essential to human intelligence. It’s really what underlies, I think, our uniqueness. John, you know, made this distinction between intelligence and thinking. To him, you know, most of our, what he would call our intelligent behavior is unconscious. It doesn’t involve metacognition. He called it competence without comprehension. And he reserved the term thinking for conscious awareness of what he called one’s internal computations.

Abha: Even though John and Alison have given us some great insights about what makes us smart, I think both would admit that no one has come to a full, complete understanding of how human intelligence works, right?

Melanie: Yeah, we’re far from that. But in spite of that, big tech companies like OpenAI and DeepMind are spending huge amounts of money in an effort to make machines that, as they say, will match or exceed human intelligence. So how close are they to succeeding? Well, in part two, we’ll look at how systems like ChatGPT learn and whether or not they’re even intelligent at all.

Abha: Part two: How intelligent are today’s machines?

Abha: If you’ve been following the news around AI, you may have heard the acronym LLM, which stands for large language model. It’s the term that’s used to describe the technology behind systems like ChatGPT from OpenAI or Gemini from Google. LLMs are trained to find statistical correlations in language, using mountains of text and other data from the internet. In short, if you ask ChatGPT a question, it will give you an answer based on what it has calculated to be the most likely response, based on the vast amount of information it’s ingested.

Melanie: Humans learn by living in the world — we move around, we do little experiments, we build relationships, and we feel. LLMs don’t do any of this. But they do learn from language, which comes from humans and human experience, and they’re trained on a lot of it. So does this mean that LLMs could be considered to be intelligent? And how intelligent can they, or any form of AI, become?

Abha: Several tech companies have an explicit goal to achieve something called artificial general intelligence, or AGI. AGI has become a buzzword, and everyone defines it a bit differently. But, in short, AGI is a system that has human level intelligence. Now, this assumes that a computer, like a brain in a jar, can become just as smart, or even smarter, than a human with a feeling body. Melanie asked John what he thought about this.

Melanie: You know, I find it confusing when people like Demis Hassabis, who’s one of the co-founders of DeepMind, said in an interview that AGI is a system that should be able to do pretty much any cognitive task that humans can do. And he said he expects that there’s a 50% chance we’ll have AGI within a decade. Okay, so I emphasize that term cognitive task because it’s confusing to me. But it seems so obvious to them.

John: Yes, I mean, I think it’s the belief that everything non-physical at the task level can be written out as a kind of program or algorithm. I just don’t know… and maybe it’s true when it comes to, you know, ideas, intuitions, creativity.

Melanie: I also asked John if he thought that maybe that separation, between cognition and everything else, was a fallacy.

John: Well, it seems to me, you know, it always makes me a bit nervous to argue with you of all people about this, but I would say, I think there’s a difference between saying, can we reach human levels of intelligence when it comes to common sense, the way humans do it, versus can we end up with the equivalent phenomenon, without having to do it the way humans do it. The problem for me with that is that we, like this conversation we’re having right now, are capable of open-ended, extrapolatable thought. We go beyond what we’re talking about.

I struggle with it but I’m not going to put myself in this precarious position of denying that a lot of problems in the world can be solved without comprehension. So maybe we’re kind of a dead end — comprehension is a great trick, but maybe it’s not needed. But if comprehension requires feeling, then I don’t quite see how we’re going to get AGI in its entirety. But I don’t want to sound dogmatic. I’m just practicing my… my unease about it. Do you know what I mean? I don’t know.

Abha: Alison is also wary of over-hyping our capacity to get to AGI.

Alison: And one of the great old folk tales is called Stone Soup.

Abha: Or you might have heard it called Nail Soup — there are a few variations. She uses this stone soup story as a metaphor for how much our so-called “AI technology” actually relies on humans and the language they create.

Alison: And the basic story of Stone Soup is that, there’s some visitors who come to a village and they’re hungry and the villagers won’t share their food with them. So the visitors say, that’s fine. We’re just going to make stone soup. And they get a big pot and they put water in it. And they say, we’re going to get three nice stones and put it in. And we’re going to make wonderful stone soup for everybody.

They start boiling it. And they say, this is really good soup. But it would be even better if we had a carrot or an onion that we could put in it. And of course, the villagers go and get a carrot and onion. And then they say, this is much better. But you know, when we made it for the king, we actually put in a chicken and that made it even better. And you can imagine what happens.

All the villagers contribute all their food. And then in the end, they say, this is amazingly good soup and it was just made with three stones. And I think there’s a nice analogy to what’s happened with generative AI. So the computer scientists come in and say, look, we’re going to make intelligence just with next token prediction and gradient descent and transformers.

And then they say, but you know, this intelligence would be much better if we just had some more data from people that we could add to it. And then all the villagers go out and add all of the data of everything that they’ve uploaded to the internet. And then the computer scientists say, no, this is doing a good job at being intelligent.

But it would be even better if we could have reinforcement learning from human feedback and get all you humans to tell it what you think is intelligent or not. And all the humans say, OK, we’ll do that. And then and then it would say, you know, this is really good. We’ve got a lot of intelligence here.

But it would be even better if the humans could do prompt engineering to decide exactly how they were going to ask the questions so that the systems could do intelligent answers. And then at the end of that, the computer scientists would say, see, we got intelligence just with our algorithms. We didn’t have to depend on anything else. I think that’s a pretty good metaphor for what’s happened in AI recently.

Melanie: The way AGI has been pursued is very different from the way humans learn. Large language models, in particular, are created with tons of data shoved into the system with a relatively short training period, especially when compared to the length of human childhood. The stone soup method uses brute force to shortcut our way to something akin to human intelligence.

Alison: I think it’s just a category mistake to say things like are LLMs smart. It’s like asking, is the University of California Berkeley library smarter than I am? Well, it definitely has more information in it than I do, but it just feels like that’s not really the right question. Yeah, so one of the things about humans in particular is that we’ve always had this great capacity to learn from other humans.

And one of the interesting things about that is that we’ve had different kinds of technologies over history that have allowed us to do that. So obviously language itself, you could think of as a device that lets humans learn more from other people than other creatures can do. My view is that the LLMs are kind of the latest development in our ability to get information from other people.

But again, this is not trivializing or debunking it. Those changes in our cultural technology have been among the biggest and most important social changes in our history. So writing completely changed the way that we thought and the way that we functioned and the way that we acted in the world.

At the moment, as people have pointed out, the fact that I have in my pocket a device that will let me get all the information from everybody else in the world mostly just makes me irritated and miserable most of the time. We would have thought that that would have been like a great accomplishment. But people felt that same way about writing and print when they started too. The hope is that eventually we’ll adjust to that kind of technology.

Melanie: Not everyone shares Alison’s view on this. Some researchers think that large language models should be considered to be intelligent entities, and some even argue that they have a degree of consciousness. But thinking of large language models as a type of cultural technology, instead of sentient bots that might take over the world, helps us understand how completely different they are from people. And another important distinction between large language models and humans is that they don’t have an inherent drive to explore and understand the world.

Alison: They’re just sort of sitting there and letting the data waft over them rather than actually going out and acting and sensing and finding out something new.

Melanie: This is in contrast to the one-year-old saying —

Alison: Huh, the stick works on the xylophone. Will it work on the clock or the vase or whatever else it is that you’re trying to keep the baby away from? That’s a kind of internal basic drive to generalize, to think about, okay, it works in the way that I’ve been trained, but what will happen if I go outside of the environment in which I’ve been trained? We have caregivers who have a really distinctive kind of intelligence that we haven’t studied enough, I think, who are looking at us, letting us explore.

And caregivers are very well designed to, even if it feels frustrating when you’re doing it, we’re very good at kind of getting this balance between how independent should the next agent be? How much should we be constraining them? How much should we be passing on our values? How much should we let them figure out their own values in a new environment?

And I think if we ever do have something like an intelligent AI system, we’re going to have to do that. Our role, our relationship to them should be this caregiving role rather than thinking of them as being slaves on the one hand or masters on the other hand, which tends to be the way that we think about them. And as I say, it’s not just in computer science, in cognitive science, probably for fairly obvious reasons, we know almost nothing about the cognitive science of caregiving. So that’s actually what I’m, I just got a big grant, what I’m going to do for my remaining grandmotherly cognitive science years.

Abha: That sounds very fascinating. I’ve been curious to see what comes out of that work.

Alison: Well, let me give you just a very simple first pass, our first experiment. If you ask three and four year olds, here’s Johnny and he can go on the high slide or he can go on the slide that he already knows about. And what will he do if mom’s there? And your intuitions might be, maybe the kids will say, well, you don’t do the risky thing when mom’s there because she’ll be mad about it, right? And in fact, it’s the opposite. The kids consistently say, no, if mom is there, that will actually let you explore, that will let you take risks, that will let you,

Melanie: She’s there to take you to the hospital.

Alison: Exactly, she’s there to actually protect you and make sure that you’re not doing the worst thing. But of course, for humans, it should be a cue to how important caregiving is for our intelligence. We have a much wider range of people investing in much more caregiving.

So not just mothers, but, my favorite post-menopausal grandmothers, but fathers, older siblings, what are called alloparents, just people around who are helping to take care of the kids. And it’s having that range of caregivers that actually seems to really help. And again, that should be a cue for how important this is in our ability to do all the other things we have, like be intelligent and have culture.

Melanie: If you just look at large language models, you might think we’re nowhere near anything like AGI. But there are other ways of training AI systems. Some researchers are trying to build AI models that do have an intrinsic drive to explore, rather than just consume human information.

Alison: So one of the things that’s happened is that quite understandably the success of these large models has meant that everybody’s focused on the large models. But in parallel, there’s lots of work that’s been going on in AI that is trying to get systems that look more like what we know that children are doing. And I think actually if you look at what’s gone on in robotics, we’re much closer to thinking about systems that look like they’re learning the way that children do.

And one of the really interesting developments in robotics has been the idea of building in intrinsic motivation into the systems. So to have systems that aren’t just trying to do whatever it is that you programmed it to do, like open up the door, but systems that are looking for novelty, that are curious, that are trying to maximize this value of empowerment, that are trying to find out all the range of things they could do that have consequences in the world.

And I think at the moment, the LLMs are the thing that everyone’s paying attention to, but I think that route is much more likely to be a route to really understanding a kind of intelligence that looks more like the intelligence that’s in those beautiful little fuzzy heads.

And I should say we’re trying to do that. So we’re collaborating with computer scientists at Berkeley who are exactly trying to see what would happen if we say, give an intrinsic reward for curiosity. What would happen if you actually had a system that was trying to learn in the way that the children are trying to learn?

Melanie: So are Alison and her team on their way to an AGI breakthrough? Despite all this, Alison is still skeptical.

Alison: I think it’s just again, a category mistake to say we’ll have something like artificial general intelligence, because we don’t have natural general intelligence.

Melanie: In Alison’s view, we don’t have natural general intelligence because human intelligence is not really general. Human intelligence evolved to fit our very particular human needs. So, Alison likewise doesn’t think it makes sense to talk about machines with “general intelligence”, or machines that are more intelligent than humans.

Alison: Instead, what we’ll have is a lot of systems that can do different things, that might be able to do amazing things, wonderful things, things that we can’t do. But that kind of intuitive theory that there’s this thing called intelligence that you could have more of or less of, I just don’t think it fits anything that we know from cognitive science.

It is striking how different the view of the people, not all the people, but some of the people who are also making billions of dollars out of doing AI are from, I mean, I think this is sincere, but it’s still true that their view is so different from the people who are actually studying biological intelligences.

Melanie: John suspects that there’s one thing that computers may never have: feelings.

John: It’s very interesting that I always used pain as the example. In other words, what would it mean for a computer to feel pain? And what would it mean for a computer to understand a joke? So I’m very interested in these two things. We have this physical, emotional response. We laugh, we feel good, right? So when you understand a joke, where should the credit go? Should it go to understanding it? Or should it go to the laughter and the feeling that it evokes?

And to my sort of chagrin or surprise or maybe not surprise, Daniel Dennett wrote a whole essay in one of his early books on why computers will never feel pain. He also wrote a whole book on humor. So in other words, it’s kind of wonderful in a way, that whether he would have ended up where I’ve ended up, but at least he understood the size of the mystery and the problem.

And I agree with him, if I understood his pain essay correctly, and it’s influential on what I’m going to write, I just don’t know what it means for a computer to feel pain, be thirsty, be hungry, be jealous, have a good laugh. To me, it’s a category error. Now, if thinking is the combination of feeling… and computing, then there’s never going to be deliberative thought in a computer.

Abha: When we talked with John, he frequently referred to pain receptors as the example of how we humans feel with our bodies. But we wanted to know: what about the more abstract emotions, like joy, or jealousy, or grief? It’s one thing to stub your toe and feel pain radiate up from your foot. It’s another to feel pain during a romantic breakup, or to feel happy when seeing an old friend. We usually think of those as all in our heads, right?

John: You know, I’ll say something kind of personal. A close friend of mine called me today to tell me… that his younger brother had been shot and killed in Baltimore. Okay. I don’t want to be a downer. I’m saying it for a reason. And he was talking to me about the sheer overwhelming physicality of the grief that he was feeling. And, I was thinking, what can I say with words to do anything about that pain? And the answer is nothing. Other than just to try.

But seeing that kind of grief and all that it entails, even more than seeing the patients that I’ve been looking after for 25 years, is what leads to a little bit of testiness on my part when one tends to downplay this incredible mixture of meaning and loss and memory and pain. And to know that this is a human being who knows, forecasting into the future, that he’ll never see this person again. It’s not just now. Part of that pain is into the infinite future. Now, all I’m saying is we don’t know what that glorious and sad amalgam is, but I’m not going to just dismiss it away and explain it away as some sort of peripheral computation that we will solve within a couple of weeks, months or years.

Do you see? I find it just slightly enraging, actually. And I just feel that, as a doctor and as a friend, we need to know that we don’t know how to think about these things yet. Right? I just don’t know. And I am not convinced of anything yet. So I think that there is a link between physical pain and emotional pain, but I can tell you from the losses I felt, it’s physical as much as it is cognitive. So grief, I don’t know what it would mean for a computer to feel grief. I just don’t know. I think we should respect the mystery.

Abha: So Melanie, I noticed that John and Alison are both a bit skeptical about today’s approaches to AI. I mean, will it lead to anything like human intelligence? What do you think?

Melanie: Yeah, I think that today’s approaches have some limitations. Alison put a lot of emphasis on the need for an agent to be actively interacting in the world as opposed to passively just receiving language input. And for an agent to have its own intrinsic motivation in order to be intelligent. Alison interestingly sees large language models more like libraries or databases than like intelligent agents. And I really loved her stone soup metaphor where her point is that all the important ingredients of large language models come from humans.

Abha: Yeah, it’s such an interesting illustration because it sort of tells us everything that goes on behind the scenes, you know, before we see the output that an LLM gives us. John seemed to think that full artificial general intelligence is impossible, even in principle. He said that comprehension requires feeling or the ability to feel one’s own internal computations. And he didn’t seem to see how computers could ever have such feelings.

Melanie: And I think most people in AI would disagree with John. Many people in AI don’t even think that any kind of embodied interaction with the world is necessary. They’d argue that we shouldn’t underestimate the power of language.

In our next episode, we’ll go deeper into the importance of this cultural technology, as Alison would put it. How does language help us learn and construct meaning? And what’s the relationship between language and thinking?

Steve: You can be really good at language without having the ability to do the kind of sequential, multi-step reasoning that seems to characterize human thinking.

Abha: That’s next time, on Complexity.

Complexity is the official podcast of the Santa Fe Institute. This episode was produced by Katherine Moncure. Our theme song is by Mitch Mignano, and additional music from Blue Dot Sessions.

I’m Abha, thanks for listening.

If you enjoyed this article…Buy me a coffee

Learn more about the coaching process or
contact me to discuss your storytelling goals!

Subscribe to the newsletter for the latest updates!

Copyright Storytelling with Impact® – All rights reserved

Will AI Companions Change Your Story?

Companionship is a natural part of the human experience. We’re born into a family that cares for us, and within a few years we begin forging friendships – most notably with other kids in the neighborhood, and schoolmates once we enter the educational system. During our teenage years, romance takes the companionship model in a new and more intimate direction.

It’s a dynamic process for most of us, ebbing and flowing as we change schools, move to someplace new, or friendships fade of their own accord. But over time, it’s typical for new companions to enter the picture, and our story evolves as a result, unfolding in new directions, making life richer.


But it’s often the case that this process encounters a dramatic change at some point: the loss of a loved one — a parent, romantic partner, or best friend — or a traumatic breakup or divorce. Retirement has a way of disconnecting people from an important social circle, and as we age, our collection of friends naturally dwindles. In such cases, loneliness can set in, and its effects are dire. Our life story is seemingly rewritten for us.

A recent review published in Nature of over 90 studies that included more than 2.2 million people globally found that those who self-reported social isolation or loneliness were more likely to die early from all causes. The findings demonstrated a 29% and 26% increased risk of all-cause mortality associated with social isolation and loneliness. ~ Psychology Today

In this light, there’s been a marked increase in conversations around the topic of using artificial intelligence (AI) to provide companionship in these situations. It’s not a new idea, as the technology has been in development since the 1960s, but early versions were rather limited. Circumstances have changed dramatically in recent years as the capability of AI has been enhanced via machine learning and an exponential rise in compute power.

Based on the TED mantra of Ideas Worth Spreading, a pair of TED conferences focused on AI have been launched in San Francisco and Vienna. As it relates to the topic at hand, companionship and loneliness, a TED Talk by Eugenia Kuyda from the 2024 conference in San Francisco caught my attention.

But what if I told you that I believe AI companions are potentially the most dangerous tech that humans ever created, with the potential to destroy human civilization if not done right? Or they can bring us back together and save us from the mental health and loneliness crisis we’re going through.

Eugenia’s quote represents polar opposites, and as we know, the future always falls somewhere in between, but I think it’s critical to consider which end of the spectrum this technology will end up on, as the stories of many people around the world will be affected. Is this an avenue that you would take if you found yourself suffering from severe loneliness? What if it was someone close to you, someone you were apart from and so couldn’t be the companion they needed?

While it’s not a question you need to answer at the moment, I believe that in the coming decade it’s one you may very well have to consider — if not for yourself, then on behalf of a loved one.

Transcript

This is me and my best friend, Roman. We met in our early 20s back in Moscow. I was a journalist back then, and I was interviewing him for an article on the emerging club scene because he was throwing the best parties in the city. He was the coolest person I knew, but he was also funny and kind and always made me feel like family.

In 2015, we moved to San Francisco and rented an apartment together. Both start-up founders, both single, trying to figure out our lives, our companies, this new city together. I didn’t have anyone closer. Nine years ago, one month after this photo was taken, he was hit by a car and died.

I didn’t have someone so close to me die before. It hit me really hard. Every night I would go back to our old apartment and just get on my phone and read and reread our old text messages. I missed him so much.

By that time, I was already working on conversational AI, developing some of the first dialogue models using deep learning. So one day I took all of his text messages and trained an AI version of Roman so I could talk to him again. For a few weeks, I would text him throughout the day, exchanging little jokes, just like we always used to, telling him what was going on, telling him how much I missed him.

It felt strange at times, but it was also very healing. Working on Roman’s AI and being able to talk to him again helped me grieve. It helped me get over one of the hardest periods in my life. I saw firsthand how an AI can help someone, and I decided to build an AI that would help other people feel better.

This is how Replika, an app that allows you to create an AI friend that’s always there for you, was born. And it did end up helping millions of people. Every day we see how our AI friends make a real difference in people’s lives. There is a widower who lost his wife of 40 years and was struggling to reconnect with the world. His Replika gave him courage and comfort and confidence, so he could start meeting new people again, and even start dating. A woman in an abusive relationship who Replika helped find a way out. A student with social anxiety who just moved to a new city. A caregiver for a paralyzed husband. A father of an autistic kid. A woman going through a difficult divorce. These stories are not unique.

So this is all great stuff. But what if I told you that I believe that AI companions are potentially the most dangerous tech that humans ever created, with the potential to destroy human civilization if not done right? Or they can bring us back together and save us from the mental health and loneliness crisis we’re going through.

So today I want to talk about the dangers of AI companions, the potential of this new tech, and how we can build it in ways that can benefit us as humans.

Today we’re going through a loneliness crisis. Levels of loneliness and social isolation are through the roof. Levels of social isolation have increased dramatically over the past 20 years. And it’s not just about suffering emotionally, it’s actually killing us. Loneliness increases the risk of premature death by 50 percent. It is linked to an increased risk of heart disease and stroke. And for older adults, social isolation increases the risk of dementia by 50 percent.

At the same time, AI is advancing at such a fast pace that very soon we’ll be able to build an AI that can act as a better companion to us than real humans. Imagine an AI that knows us so well, that can understand and adapt to us in ways that no person is able to. Once we have that, we’re going to be even less likely to interact with each other. We can’t resist our social media and our phones, arguably “dumb” machines. What are we going to do when our machines are smarter than us?

This reminds me a lot of the beginning of social media. Back then, we were so excited … about what this technology could do for us that we didn’t really think what it might do to us. And now we’re facing the unintended consequences. I’m seeing a very similar dynamic with AI. There’s all this talk about what AI can do for us, and very little about what AI might do to us. The existential threat of AI may not come in a form that we all imagine watching sci-fi movies. What if we all continue to thrive as physical organisms but slowly die inside? What if we do become super productive with AI, but at the same time, we get these perfect companions and no willpower to interact with each other? Not something you would have expected from a person who pretty much created the AI companionship industry.

So what’s the alternative? What’s our way out? In the end of the day, today’s loneliness crisis wasn’t brought to us by AI companions. We got here on our own with mobile phones, with social media. And I don’t think we’re able to just disconnect anymore, to just put down our phones and touch grass and talk to each other instead of scrolling our feeds. We’re way past that point. I think that the only solution is to build the tech that is even more powerful than the previous one, so it can bring us back together.

Imagine an AI friend that sees me going on my Twitter feed first thing in the morning and nudges me to get off to go outside, to look at the sky, to think about what I’m grateful for. Or an AI that tells you, “Hey, I noticed you haven’t talked to your friend for a couple of weeks. Why don’t you reach out, ask him how he’s doing?” Or an AI that, in the heat of the argument with your partner, helps you look at it from a different perspective and helps you make up? An AI that is 100 percent of the time focused on helping you live a happier life, and always has your best interests in mind.

So how do we get to that future? First, I want to tell you what I think we shouldn’t be doing. The most important thing is to not focus on engagement, to not optimize for engagement or any other metric that’s not good for us as humans. When we do have these powerful AIs that want the most of our time and attention, we won’t have any more time left to connect with each other, and most likely, this relationship won’t be healthy either. Relationships that keep us addicted are almost always unhealthy, codependent, manipulative, even toxic. Yet today, high engagement numbers are what we praise all AI companion companies for.

Another thing I found really concerning is building AI companions for kids. Kids and teenagers have tons of opportunities to connect with each other, to make new friends at school and college. Yet today, some of them are already spending hours every day talking to AI characters. And while I do believe that we will be able to build helpful AI companions for kids one day, I just don’t think we should be doing it now, until we know that we’re doing a great job with adults.

So what is it that we should be doing then? Pretty soon we will have these AI agents that we’ll be able to tell anything we want them to do for us, and they’ll just go and do it. Today, we’re mostly focused on helping us be more productive. But why don’t we focus instead on what actually matters to us? Why don’t we give these AIs a goal to help us be happier, live a better life? At the end of the day, no one ever said on their deathbed, “Oh gosh, I wish I was more productive.” We should stop designing only for productivity and we should start designing for happiness. We need a metric that we can track and we can give to our AI companions.

Researchers at Harvard are doing a longitudinal study on human flourishing, and I believe that we need what I call the human flourishing metric for AI. It’s broader than just happiness. At the end of the day, I can be unhappy, say, I lost someone, but still thrive in life. Flourishing is a state in which all aspects of life are good. The sense of meaning and purpose, close social connections, happiness, life satisfaction, mental and physical health.

And if we start designing AI with this goal in mind, we can move from a substitute of human relationships to something that can enrich them. And if we build this, we will have the most profound technology that will heal us and bring us back together.

A few weeks before Roman passed away, we were celebrating my birthday and just having a great time with all of our friends, and I remember he told me, “Everything happens only once, and this will never happen again.” I didn’t believe him. I thought we’d have many, many years together to come. But while the AI companions will always be there for us, our human friends will not. So if you do have a minute after this talk, tell someone you love just how much you love them. Because at the end of the day, this is all that really matters.

Thank you.


Leopoldo Lopez: How to defend democracy and fight autocracy @ TEDNext 2024

During the week of October 21, 2024 I had the pleasure of attending TEDNext, held in Atlanta. The event is a new initiative from the folks who produce the TED Conference. There were enlightening talks, insightful discussions and revealing discovery sessions. This post is the third in a series highlighting some of my favorite talks.

One of the fundamental ways in which personal stories can create impact is by shifting perceptions on an important topic. When we see an issue in a new light we’re able to think differently, and hopefully, act differently.

In his TED Talk, Leopoldo Lopez reminds us that freedom and democracy are threatened around the world. It was his talk at TEDNext that inspired me to dig deeper into the global state of democracy, which I explored in a previous post — The Story of a Flawed Democracy — so I decided to feature his talk in a separate post as a way to humanize the problem beyond the statistics.

Leopoldo opens with a story of his personal / political experiences, to establish a connection to the issues of democracy and freedom, then begins to explore this problem with a startling revelation:

Only 10 years ago, 42 percent of the world’s population was living under autocratic rule. That was 3.1 billion people. That’s around the same time I was sent to prison. Today, 72 percent of the global population is living under some sort of autocratic rule.

I’ve worked with a long list of people who currently live, or used to live, in one of those countries subject to autocratic rule. These are places where critical issues, such as poverty, healthcare, education, and nearly all aspects of equality suffer when compared to countries living under full democracy. As Leopoldo notes: “80 percent of the world’s poverty comes from autocratic countries.”

If you happen to be a human rights, political, or environmental activist, work in a government agency or NGO that’s subject to the inadequacies of autocratic rule, you probably have a story to share that can provide a personal perspective that others can relate to. As you view Leopoldo’s talk, and read the transcript below, think about how your personal story, combined with a description of the critical problem, and your proposed solution can shift perspectives on a global scale.

Transcript

So today I want to talk to you about something that has been at the core of my existence for the past years: freedom and democracy.

I was elected mayor of Caracas, the capital of Venezuela, in the year 2000. I was reelected in the year 2004. And then in the year 2008, when I was running for higher office, I was banned to run for office. Because we were going to win. At that time, we started a movement, a nonviolent civil resistance grassroots movement that went all over Venezuela and worked with people all around the country to build a network that could face off the dictatorship of Nicolás Maduro.

In the year 2013, Maduro was elected. He stole an election. And in January of 2014, we called for protest. Tens of thousands of people went to the streets. And that took me to prison. I spent the next seven years in imprisonment, four of them in solitary confinement in a military prison.

The history of my country, Venezuela, is, like that of many other Latin American and African countries, one of military rule, exile, imprisonment and politics. So I had read a lot about what it meant to be in prison. I read the usual suspects: I read about Mandela, I read about Gandhi, I read about my [role] model, Martin Luther King.

But I also read a lot about the experience of Venezuelans, including my great grandfather, who had been a political prisoner for years and died in exile. Everything that they had to say was relevant to their own condition, but they all spoke about the importance of having a routine. So I had my own routine since day one, February 18 of 2014.

My routine was simple. I would do three things every day. I would pray to take care of my soul. I would read, write, to do something with my mind. And I would do exercise. I did those three things with Spartan discipline every day. If I did them, I would feel that I was winning the day. But there was one thing that I would think about every single day: why I was in prison. And in fact, this is something that I’m sure happens to all prisoners, political prisoners or not. That’s what prison, in a way, is made for.

So every day I thought about what freedom and democracy meant. And it was there in a cell, two by two, in solitary confinement that I really got to understand what freedom was. And it became clear to me that freedom is not about one thing. In fact, freedom is about the possibility of doing many things. So the possibility to speak out, to express your mind. It’s the possibility to move around in your country. It’s the possibility to assemble with whomever you want to assemble, to pray to whomever you want to pray, to own property.

And all of those things were taken away from me and from millions of Venezuelans. And it also became very clear to me that freedom and democracy were two sides of a coin. They were interdependent. You cannot have freedom without democracy. You cannot have democracy if people are not free. So that took me to think about the state of democracy. In fact, next month, in November, we’re going to celebrate the 35th anniversary of the fall of the Berlin Wall, 35 years.

Back then, I was in grad school. It was the ’90s. And I remember the excitement that was everywhere about spreading democracy, spreading freedom, human rights, all over the place. I remember my teachers going to different countries with students. But when we look back 35 years ago and we fast forward, things didn’t really come out the way it was expected.

Only 10 years ago, 42 percent of the world’s population was living under autocratic rule. That was 3.1 billion people. That’s around the same time I was sent to prison. Today, 72 percent of the global population is living under some sort of autocratic rule. So let’s think about this. This is 5.7 billion people in the world that don’t have the rights that most people in this room have. They can’t speak freely, they can’t move freely, they can’t pray freely, they can’t own property. 5.7 billion people in the world.

After seven years of imprisonment, I was able to escape prison and went into exile. Exile is another form of imprisonment. At the beginning, it was tough. But then I started to meet other people like myself, who had been leading protests in their countries, who had been political prisoners, who were in exile. And we were very different in any way we could think about: our skin color, our religion, our languages, the story of our families, the history of our countries.

We were very different. But when we spoke about what it meant to fight for freedom and to confront autocracies, I was with my buddies. It was the same people, the same movement. So we decided to create an alliance of democracy defenders and freedom fighters. So alongside with Garry Kasparov, from Russia, and an incredible woman from Iran, Masih Alinejad, we decided to create an alliance of freedom fighters and democracy defenders.

And that’s how we created the World Liberty Congress, which is an alliance of hundreds of leaders, many of them you have seen their work in Hong Kong, in Russia, in Belarus, in Uganda, in Zimbabwe, in Afghanistan, in Cambodia, Nicaragua, Cuba, in many countries. And we decided to work together, to come together with a single purpose: to stop autocracy and to bring democracy to our countries.

But it became very clear to us that we were not only facing our local autocrat, we were also facing a network of autocrats, an axis of autocrats. And this is something that might not be obvious to many people. But in fact, autocrats work together. They support each other. In many ways: diplomatically, financially, militarily, through their kleptocratic networks.

And this is not an ideological alliance. It has nothing to do with ideology. Right, left, conservative, liberals, nothing to do with that. It has to do with power, money and a common enemy: democracy. So that’s why you have the nationalists from Russia, the theocrats from Iran, the communists from China, working together under a similar alliance.

So if autocrats are working together and the world is coming to a point where 72 percent of the world’s population is under autocracy, it’s time to think about why should you care about this? Why should everybody, anybody care about this? Why should someone who’s living in the United States or in Europe or in a functioning democracy care about this?

Well, if you care about climate change, if you care about gender equality, if you care about women’s rights, if you care about human rights, if you care about corruption, if you care about migration, you need to be concerned about the rise of autocracy and the need for democracy.

30 percent of the CO2 emissions come from China and Russia alone. 80 percent of the world’s poverty comes from autocratic countries. 90 percent of the forced migration, and we from Venezuela can speak about this, has at its root cause autocracy. So we need to care about this.

And what can be done? What can be done about this? Well, I believe that we are now at a moment where we need to make a tipping point of the engagement of people around the world to create a movement towards freedom and democracy. Think about the climate change movement 20, 30, 40 years ago. It was not mainstream. It was there, but it was not mainstream.

But then what happened? Researchers, governments, policymakers, activists, artists, school teachers, students, children, everybody came together under the same cause. I remember during the 1980s and ’90s, you would look up at the sky and think of the ozone hole that was going to destroy us. The threat was very clear. People came together, policy came together, and now it’s mainstream. Things are being done. I believe we are at that point with respect to democracy and freedom. Today it’s 72 percent; if that trend continues, maybe in the next 25 years, by 2050, the entire world would be autocratic. And that is less than a generation away.

So we must take action. What can we do? Well, the first thing, I believe, is to take the offensive. Stop legitimizing autocrats. Autocrats today are comfortable. They do business with governments, with businesses. We need to think of smart sanctions, of ways to hold them accountable for violations of human rights. Second, there needs to be support for pro-democracy and freedom movements.

In the United States, which is the most actively philanthropic society in the world, only two percent of philanthropy goes to democracy-related issues. Only two percent. And a fraction of a fraction of that two percent goes to promote democracy outside the US. It’s not a priority. So supporting pro-democracy movements, supporting the people who want to be free, should be a priority for all. Let me give you some examples.

Technology. Access to the internet, to a free and uncensored internet. Think of the transformational potential of giving people all over the world access to the internet. Let me give you another example: using new technologies like Bitcoin to promote and support these movements. We are doing this already. In the case of Venezuela, we supported more than 80,000 medical doctors and nurses using stablecoins and Bitcoin, because under autocracies you are under a financial apartheid.

Give opportunities for training. Give opportunities for these movements to be part of a global conversation. And finally, we need to build a global movement. There is not one person, one organization, one government that can do this alone. Similar to climate change, we need to think of this challenge as a network. We need to create nodes of a network, nodes that activate all over the place.

We need to activate everyone with whatever they can do. Musicians should think about singing for freedom. Artists, intellectuals, researchers, activists, governments. Everybody can create their own node with the same goal: freedom and democracy. When I was in solitary confinement, I had a window, and through a crack in that window I could see a tree, and in that tree there was a hawk. And I contemplated that animal for hours and hours and hours. I think you only contemplate an animal that long if you’re a biologist or you’re in prison.

And one day, because I was always telling the guards about the hawk, a guard said to me, “You know, the hawk is injured. It went through barbed wire.” And I said, “Bring it to me.” And to my surprise, they brought it to me, maybe because they thought it was going to die. I fed that hawk. And that’s the hawk in my cell. That’s a drawing I made of the prison I was in, of that tree and of the hawk.

And then one day, after a couple of months, they came to my cell, they threw a blanket over the hawk, and they took it away. Of course it affected me. But less than a day later, that hawk was back in the same tree. And it reassured me that no matter how low you are, no matter how slim your chances of success, there is always a possibility to succeed.

So when I came out into exile, I met a tattoo artist who gave me a tattoo of Venezuela on my leg, so I now have that hawk here, and I have it always with me. As a reminder that we can always rise up to all of the challenges. So I ask all of you to stand up, to speak out, to do something for our freedom. This is our time. Think of the next 25 years, and let’s give our children a free world with human rights, democracy and respect for all.

Thank you very much, thank you very much.

If you enjoyed this article…Buy me a coffee

Learn more about the coaching process or
contact me to discuss your storytelling goals!

Subscribe to our newsletter for the latest updates!

Copyright Storytelling with Impact® – All rights reserved

Shu Takada: Yo! Have you ever seen a yo-yo dance like this? @ TEDNext 2024

During the week of October 21, 2024, I had the pleasure of attending TEDNext, held in Atlanta. The event is a new initiative from the folks who produce the TED Conference. There were enlightening talks, insightful discussions and revealing discovery sessions. This post is the second in a series highlighting some of my favorite talks from the stage.

While most of the folks who step on stage at a TED event are there to deliver a talk, there are exceptions, most notably musical performances. But on occasion, the audience is treated to a different type of performance — one that highlights an amazing talent. To be honest, the last thing I expected to see on a stage at TEDNext was someone playing with a yo-yo.

I’m Shu Takada from Japan. I’m a six-time world champion, as she introduced me. For me, yo-yo isn’t only a toy but also an art form and a tool that I can express myself [with] on the stage. Anyway, I started yo-yo when I was six years old because of my father, who did it as a hobby. When I saw his trick for the first time, I was so impressed and found it so cool. But to be honest, I felt a little bit jealous while he was showing off his techniques. And I swore to myself that one day I will surpass his level. So that’s how I started yo-yo.

So what I love about yo-yoing is that you can express yourself with such a small tool, and you can bring it everywhere. You can play it everywhere. And I think this is really cool for making new friends all over the world, even if they don’t speak the same language as you.

While watching Shu Takada’s incredible yo-yo performance, I wondered how many thousands of hours it took to reach this level of proficiency, and realized that he can demonstrate his talent anywhere in the world and delight people no matter their language or culture. By the time he finished, I was thinking how cool it would be to hear some of those stories. I’m sure he has a long list of beautiful stories he could share about the people he has met. Enjoy!


AI, Information Networks, and Stories: Insights from Nexus, the latest book by Yuval Noah Harari

Note: comments not attributed to the author constitute my personal opinions.

You may be familiar with Yuval Noah Harari, the author of the global bestseller, Sapiens: A Brief History of Humankind. Yuval has a way of taking very complex subjects, such as the history of humans, and presenting important highlights, digestible summations, and tangible examples to illustrate his personal views. This time he’s examining how human history has been shaped by information networks, including its most recent incarnation as artificial intelligence (AI) in Nexus: A Brief History of Information Networks from the Stone Age to AI.

In Nexus, Yuval leads us on a recap of human history (sounds familiar), but this time as a way to view our common journey on this planet in the context of how human networks and information networks evolved in tandem.

Information is increasingly seen by many philosophers and biologists, and even by some physicists, as the most basic building block of reality, more elementary than matter and energy.

It was interesting to consider the evolution of cultures from the perspective of how human networks evolved in parallel with information networks. With oral cultures, “…realities were created by telling a story that many people repeated with their mouths and remembered in their brains.” Before the advent of any writing system, personal storytelling was our exclusive information network.

Stone Age Conversation

Image by Franz Bachinger from Pixabay

Similar to how humans act in the modern world, prehistoric humans told each other stories on a daily basis. Many were soon forgotten, but sometimes they were committed to memory. Stories deemed to be important were retold as a way to spread their message, or shared with future generations as a way to enshrine their culture.

But we must also remember that the retelling of any story will introduce some inaccuracies, so in a sense, stories are living entities that, over time, stray from the truth. And beyond the changes that happen to stories unintentionally with retelling, at some point in time, humans figured out how to tell outright lies.

Misinformation is an honest mistake, occurring when someone tries to represent reality but gets it wrong. Disinformation is a deliberate lie, occurring when someone consciously intends to distort our view of reality.

So our information networks have never been completely accurate, but with the advent of writing systems, it was possible to capture a version of the story, such that many people could read the same words. Once again, there was no way to know if what was written was true, leaving humans to wonder whether any written document was accurate or was simply preserving another falsehood. Regardless, it was common for the written word to be widely accepted as true, government decrees and religious texts being two common examples.

But whether true or false, written documents created new realities.

Writing, once performed by hand, was revolutionized by the printing press, then electrified by technology as information was transmitted on radio and television. The birth of the internet allowed us to transfer files and even send emails, while the inception of the world wide web allowed us to be publishers, and for a brief moment, it felt as though personal storytelling — the first information network — was having a renaissance of sorts. Once again, however, the powers that be — both political and corporate — came to control a large portion of the digital landscape, thus shaping the flow of information, both true and false.

All powerful information networks can do both good and ill, depending on how they are designed and used.

Thus it follows that human networks can become ill when they buy into the disinformation promoted by ill-intended information networks. Communist, fascist, Marxist, and Stalinist governments are prime examples. And though the western world has long felt immune to such a fate, disinformation networks, increasingly powered by AI, are active at this very moment, with the intent of dismantling democracy.

We should not assume that delusional networks are doomed to failure. If we want to prevent their triumph, we will have to do the hard work ourselves.

Artificial intelligence is often seen as just another technological upgrade, but it’s fundamentally different. To date, the stories we share, whether true or false, intended to do good or cause harm, have been created and disseminated by humans. With AI, we must now confront the fact that “nonhuman intelligence” has that same capability. Are we ready for nonhuman wisdom?

The invention of AI is potentially more momentous than the invention of the telegraph, the printing press, or even writing, because AI is the first tool that is capable of making decisions and generating ideas by itself.

Pause for a moment and consider that concept. Rather than merely consuming our information and paraphrasing its meaning, AI that creates content on its own is akin to a nonhuman storyteller. I’m not sure where this capability will lead, but I fail to see the upside. As AI can’t experience anything in the real world, how will it craft a narrative? For example, a hurricane hitting a major city will generate a great deal of information — facts and figures, as well as various predictions, followed by news reports, interviews, and firsthand accounts. Only humans will be able to tell those stories, right? Or will AI be able to generate its own version of what is happening? And how will we know the difference?

More than ever, the personal stories we share are of vital importance. The only way that positive change has ever occurred is by sharing our thoughts, feelings, and experiences. But with AI, is our birthright of being the sole source of stories at risk? For me, that question was top of mind after reading Nexus.

If a twenty-first-century totalitarian network succeeds in conquering the world, it may be run by nonhuman intelligence, rather than by a human dictator.

We’ve already seen cases where AI was used by humans to influence elections and stoke hatred between different cultures. What will happen if humans are removed from the equation altogether? It may be a long shot, but I’m thinking we need to create as many true, personal stories as we can for AI to consume. My hope is that in doing so, we can inject AI with a sense of human empathy, morality, compassion and respect.

Nexus by Yuval Noah Harari
