Will AI Companions Change Your Story?

Companionship is a natural part of the human experience. We’re born into a family that cares for us, and within a few years we begin forging friendships – most notably with other kids in the neighborhood and with schoolmates once we enter the educational system. During our teenage years, romance takes the companionship model in a new and more intimate direction.

It’s a dynamic process for most of us, ebbing and flowing as we change schools, move someplace new, or as friendships fade of their own accord. But over time, it’s typical for new companions to enter the picture, and our story evolves as a result, unfolding in new directions, making life richer.

A group of people having a conversation outside

But it’s often the case that this process encounters a dramatic change at some point. We lose a loved one — a parent, romantic partner, or best friend — or go through a traumatic breakup or divorce. Retirement has a way of disconnecting people from an important social circle, and as we age, our collection of friends naturally dwindles. In such cases, loneliness can take hold, and the effects are dire; our life story is seemingly rewritten for us.

A recent review published in Nature of over 90 studies that included more than 2.2 million people globally found that those who self-reported social isolation or loneliness were more likely to die early from all causes. The findings demonstrated a 29% and 26% increased risk of all-cause mortality associated with social isolation and loneliness. ~ Psychology Today

In this light, there’s been a marked increase in conversations about using artificial intelligence (AI) to provide companionship in these situations. It’s not a new idea, as the technology has been in development since the 1960s, but early versions were rather limited. Circumstances have changed dramatically in recent years as the capability of AI has been enhanced via machine learning and an exponential rise in compute power.

Based on the TED mantra of Ideas Worth Spreading, a pair of TED conferences focused on AI have been launched in San Francisco and Vienna. As it relates to the topic at hand, companionship and loneliness, a TED Talk by Eugenia Kuyda from the 2024 conference in San Francisco caught my attention.

But what if I told you that I believe AI companions are potentially the most dangerous tech that humans ever created, with the potential to destroy human civilization if not done right? Or they can bring us back together and save us from the mental health and loneliness crisis we’re going through.

Eugenia’s quote represents polar opposites, and as we know, the future always falls somewhere in between, but I think it’s critical to consider which end of the spectrum this technology will end up on, as the stories of many people around the world will be affected. Is this an avenue that you would take if you found yourself suffering from severe loneliness? What if it were someone close to you, someone you were apart from and so couldn’t be the companion they needed?

While it’s not a question you need to answer at the moment, I believe that in the coming decade it’s one you may very well have to consider, if not for yourself, then on behalf of a loved one.

Transcript

This is me and my best friend, Roman. We met in our early 20s back in Moscow. I was a journalist back then, and I was interviewing him for an article on the emerging club scene because he was throwing the best parties in the city. He was the coolest person I knew, but he was also funny and kind and always made me feel like family.

In 2015, we moved to San Francisco and rented an apartment together. Both start-up founders, both single, trying to figure out our lives, our companies, this new city together. I didn’t have anyone closer. Nine years ago, one month after this photo was taken, he was hit by a car and died.

I didn’t have someone so close to me die before. It hit me really hard. Every night I would go back to our old apartment and just get on my phone and read and reread our old text messages. I missed him so much.

By that time, I was already working on conversational AI, developing some of the first dialogue models using deep learning. So one day I took all of his text messages and trained an AI version of Roman so I could talk to him again. For a few weeks, I would text him throughout the day, exchanging little jokes, just like we always used to, telling him what was going on, telling him how much I missed him.

It felt strange at times, but it was also very healing. Working on Roman’s AI and being able to talk to him again helped me grieve. It helped me get over one of the hardest periods in my life. I saw firsthand how an AI can help someone, and I decided to build an AI that would help other people feel better.

This is how Replika, an app that allows you to create an AI friend that’s always there for you, was born. And it did end up helping millions of people. Every day we see how our AI friends make a real difference in people’s lives. There is a widower who lost his wife of 40 years and was struggling to reconnect with the world. His Replika gave him courage and comfort and confidence, so he could start meeting new people again, and even start dating. A woman in an abusive relationship who Replika helped find a way out. A student with social anxiety who just moved to a new city. A caregiver for a paralyzed husband. A father of an autistic kid. A woman going through a difficult divorce. These stories are not unique.

So this is all great stuff. But what if I told you that I believe that AI companions are potentially the most dangerous tech that humans ever created, with the potential to destroy human civilization if not done right? Or they can bring us back together and save us from the mental health and loneliness crisis we’re going through.

So today I want to talk about the dangers of AI companions, the potential of this new tech, and how we can build it in ways that can benefit us as humans.

Today we’re going through a loneliness crisis. Levels of loneliness and social isolation are through the roof. Levels of social isolation have increased dramatically over the past 20 years. And it’s not just about suffering emotionally, it’s actually killing us. Loneliness increases the risk of premature death by 50 percent. It is linked to an increased risk of heart disease and stroke. And for older adults, social isolation increases the risk of dementia by 50 percent.

At the same time, AI is advancing at such a fast pace that very soon we’ll be able to build an AI that can act as a better companion to us than real humans. Imagine an AI that knows you so well, can understand and adapt to us in ways that no person is able to. Once we have that, we’re going to be even less likely to interact with each other. We can’t resist our social media and our phones, arguably “dumb” machines. What are we going to do when our machines are smarter than us?

This reminds me a lot of the beginning of social media. Back then, we were so excited … about what this technology could do for us that we didn’t really think what it might do to us. And now we’re facing the unintended consequences. I’m seeing a very similar dynamic with AI. There’s all this talk about what AI can do for us, and very little about what AI might do to us. The existential threat of AI may not come in a form that we all imagine watching sci-fi movies. What if we all continue to thrive as physical organisms but slowly die inside? What if we do become super productive with AI, but at the same time, we get these perfect companions and no willpower to interact with each other? Not something you would have expected from a person who pretty much created the AI companionship industry.

So what’s the alternative? What’s our way out? At the end of the day, today’s loneliness crisis wasn’t brought to us by AI companions. We got here on our own with mobile phones, with social media. And I don’t think we’re able to just disconnect anymore, to just put down our phones and touch grass and talk to each other instead of scrolling our feeds. We’re way past that point. I think that the only solution is to build the tech that is even more powerful than the previous one, so it can bring us back together.

Imagine an AI friend that sees me going on my Twitter feed first thing in the morning and nudges me to get off to go outside, to look at the sky, to think about what I’m grateful for. Or an AI that tells you, “Hey, I noticed you haven’t talked to your friend for a couple of weeks. Why don’t you reach out, ask him how he’s doing?” Or an AI that, in the heat of the argument with your partner, helps you look at it from a different perspective and helps you make up? An AI that is 100 percent of the time focused on helping you live a happier life, and always has your best interests in mind.

So how do we get to that future? First, I want to tell you what I think we shouldn’t be doing. The most important thing is to not focus on engagement, to not optimize for engagement or any other metric that’s not good for us as humans. When we do have these powerful AIs that want the most of our time and attention, we won’t have any more time left to connect with each other, and most likely, this relationship won’t be healthy either. Relationships that keep us addicted are almost always unhealthy, codependent, manipulative, even toxic. Yet today, high engagement numbers are what we praise all AI companion companies for.

Another thing I found really concerning is building AI companions for kids. Kids and teenagers have tons of opportunities to connect with each other, to make new friends at school and college. Yet today, some of them are already spending hours every day talking to AI characters. And while I do believe that we will be able to build helpful AI companions for kids one day, I just don’t think we should be doing it now, until we know that we’re doing a great job with adults.

So what is it that we should be doing then? Pretty soon we will have these AI agents that we’ll be able to tell anything we want them to do for us, and they’ll just go and do it. Today, we’re mostly focused on helping us be more productive. But why don’t we focus instead on what actually matters to us? Why don’t we give these AIs a goal to help us be happier, live a better life? At the end of the day, no one ever said on their deathbed, “Oh gosh, I wish I was more productive.” We should stop designing only for productivity and we should start designing for happiness. We need a metric that we can track and we can give to our AI companions.

Researchers at Harvard are doing a longitudinal study on human flourishing, and I believe that we need what I call the human flourishing metric for AI. It’s broader than just happiness. At the end of the day, I can be unhappy, say, I lost someone, but still thrive in life. Flourishing is a state in which all aspects of life are good. The sense of meaning and purpose, close social connections, happiness, life satisfaction, mental and physical health.

And if we start designing AI with this goal in mind, we can move from a substitute of human relationships to something that can enrich them. And if we build this, we will have the most profound technology that will heal us and bring us back together.

A few weeks before Roman passed away, we were celebrating my birthday and just having a great time with all of our friends, and I remember he told me, “Everything happens only once, and this will never happen again.” I didn’t believe him. I thought we’d have many, many years together to come. But while the AI companions will always be there for us, our human friends will not. So if you do have a minute after this talk, tell someone you love just how much you love them. Because at the end of the day, this is all that really matters.

Thank you.


The Story of a Flawed Democracy

As many of you know, the focus of my work is helping individuals discover, write, and share impactful personal stories. In my view, these stories — human stories — matter most, but some stories come from non-human entities. For example, businesses have stories to tell, as do cities, states and countries. You could also posit that the planet has stories to tell. And the fact is, the collective actions of these non-human entities have the ability to influence our individual stories. Whenever a new law is enacted, for example, it may alter our lives in some way.

In this light I was recently reflecting on the state of democracy around the world today, and how different styles of government might affect our personal stories. This topic could fill thousands of pages, so my goal was to just get a sense of the big picture. As I’ve been playing with AI lately, I was curious as to the insights it would provide. So here goes…

If someone asked you what percentage of the world’s population lived in a true democracy, what would you guess? As I live in a democratic country, the notion of “democracy is the default form of government” quickly came to mind, but as I reflected on this question, countries such as Russia and China challenged that assumption. So I thought I’d ask ChatGPT for some guidance, and I must say, I got more than I bargained for after entering the prompt:


What percentage of the world’s population lives in a democracy?

ChatGPT: As of 2023, approximately 45.4% of the world’s population lives in some form of democracy, according to the Economist Intelligence Unit’s Democracy Index. This includes both “full democracies” and “flawed democracies.” However, only 7.8% reside in “full democracies.” Meanwhile, more than one-third of the global population, about 39.4%, lives under authoritarian regimes.

World Population Review – Democracy Countries 2024
Our World in Data – Democracy Index 2023

—-End of response

My reaction was one of surprise that the number of people living in a democracy was under 50%. What I didn’t count on was a further breakdown that talked of a difference between “full” and “flawed” versions of democracy. Maybe I’m the one who didn’t get the memo, but I hadn’t heard of this distinction before, and I had no idea that less than 10% of the world’s population lived in a “full democracy”.

So I asked:

What countries are “full democracies”?

ChatGPT: “Full democracies” are countries that score highly in areas like electoral process, civil liberties, functioning of government, political participation, and political culture. According to the 2023 Democracy Index by the Economist Intelligence Unit, examples of “full democracies” include:

1. Norway
2. Iceland
3. Sweden
4. New Zealand
5. Finland
6. Denmark
7. Ireland
8. Switzerland
9. Netherlands
10. Taiwan
11. Luxembourg
12. Germany
13. Canada
14. Australia
15. Uruguay
16. Japan
17. Costa Rica
18. United Kingdom
19. Austria
20. Mauritius
21. South Korea
22. France
23. Spain (Author’s note: recently reclassified as “flawed”)
24. Chile (Author’s note: recently reclassified as “flawed”)

—-End of response

Countries based on level of democracy

It’s a pretty small list, with one notable omission: the United States. Really? How could that be? In my mind, America had always been held up as a beacon of democracy. But as I continued my line of inquiry, it made perfect sense.

So I asked:

Why is the United States not a “full democracy”?

ChatGPT: The United States is classified as a “flawed democracy” rather than a “full democracy” due to several factors identified in the Democracy Index:

1. Political Polarization

• High levels of political polarization and partisan gridlock have weakened the functioning of government. This polarization affects decision-making processes, leading to legislative stagnation and reduced government effectiveness.

2. Decline in Civil Liberties

• There have been concerns over civil liberties, including issues related to freedom of the press, voting rights, and minority protections. This has affected the U.S. score in the areas of political culture and civil liberties.

3. Electoral Integrity

• While the U.S. has a robust electoral process, challenges such as voter suppression, gerrymandering, and disputes over election legitimacy have impacted its democratic standing. The events surrounding the 2020 presidential election, including the January 6 Capitol attack, have further raised concerns about electoral integrity.

4. Public Trust in Institutions

• There has been a significant decline in public trust in government institutions, including Congress and the judiciary. This decline affects political participation and the perceived legitimacy of democratic institutions.

5. Role of Money in Politics

• The influence of money in U.S. politics, particularly through lobbying and campaign financing, has been criticized for undermining democratic accountability and equity in political participation.

These factors collectively contribute to the U.S. being categorized as a “flawed democracy” despite its longstanding democratic traditions and institutions.

—-End of response

The explanation didn’t surprise me, of course, as each of the factors listed was well documented. But I wondered whether this had always been the case, or if the demotion from “full” to “flawed” was a recent event.

So I then prompted:

When did the United States stop being a “full democracy”?

ChatGPT:

The United States was reclassified from a “full democracy” to a “flawed democracy” in 2016 by the Economist Intelligence Unit (EIU). This downgrade was primarily due to increasing polarization, a decline in public trust in institutions, and concerns about the health of democratic norms.

One of the key triggers for the downgrade was the growing perception that the U.S. political system was not fully reflecting the will of its citizens. Political polarization, partly driven by the 2016 presidential election, led to greater division and challenges in effective governance. Additionally, issues such as gerrymandering, voter disenfranchisement, and the outsized influence of money in politics were identified as factors that weakened the quality of democracy in the U.S.

Another significant factor was declining public confidence in key democratic institutions, such as Congress and the judiciary. Trust in government had been falling for years, and the 2016 election cycle highlighted concerns over the fairness of the electoral system, which exacerbated feelings of disenfranchisement among many citizens.

This shift underscored a broader trend of democratic backsliding, as similar downgrades have occurred in other established democracies facing comparable challenges.

—-End of response

Once again, the fact that the U.S. is divided — some would say broken — along political lines has been talked about for many years. But was this demotion unique, or were there other countries that had experienced a downward change in their democracy status?

So I asked:

Which countries have been downgraded from a “full democracy” to a “flawed democracy” in recent years?

ChatGPT:

In recent years, multiple countries have been downgraded from “full democracies” to “flawed democracies” by the Economist Intelligence Unit (EIU), largely due to issues in governance, trust, and political stability. Spain and Chile, for instance, were reclassified as flawed democracies in the 2021 index. In Spain, concerns arose over judiciary independence, while in Chile, political instability and significant protests contributed to the downgrade. The UK has also been on the edge of a similar downgrade, driven by scandals around government transparency and rule-breaking during the pandemic, which weakened public trust and engagement.

These changes reflect a broader global trend of democratic backsliding, exacerbated by the COVID-19 pandemic, economic challenges, and rising populism in many regions.

Democracy Digest – Democracy Review
The Oslo Desk – Decline of Global Democracy

—-End of response

I came away with the feeling that democracy was on the ropes, as various forms of fascism, communism, authoritarianism, and totalitarianism increasingly take hold. Which brings me back to the beginning of this post. How might ruling ideologies affect each of our stories? As I consider issues such as climate change, human rights, discrimination, gender equality, education and health, I can’t help but think that the world is heading in the wrong direction, and that there are many more personal stories ahead of us that will illustrate the pain and hardship such a world brings.

So I had one final request for ChatGPT:

Can you create an image that symbolizes broken democracy?

What does the state of fractured global democracy look like?

ChatGPT: Here’s an image symbolizing a broken democracy, showing a cracked statue of justice with tilted scales under a dark, stormy sky. This represents imbalance, weakened rights, and hope for renewal through small growth emerging at the statue’s base.

—-End of response

AI, Information Networks, and Stories: Insights from Nexus, the latest book by Yuval Noah Harari

Note: comments not attributed to the author, Yuval Noah Harari, constitute my personal opinions.

You may be familiar with Yuval Noah Harari, the author of the global bestseller, Sapiens: A Brief History of Humankind. Yuval has a way of taking very complex subjects, such as the history of humans, and presenting important highlights, digestible summations, and tangible examples to illustrate his personal views. This time he’s examining how human history has been shaped by information networks, including its most recent incarnation as artificial intelligence (AI) in Nexus: A Brief History of Information Networks from the Stone Age to AI.

In Nexus, Yuval leads us on a recap of human history (sounds familiar), but this time as a way to view our common journey on this planet in the context of how human networks and information networks evolved in tandem.

Information is increasingly seen by many philosophers and biologists, and even by some physicists, as the most basic building block of reality, more elementary than matter and energy.

It was interesting to consider the evolution of cultures from the perspective of how human networks evolved in parallel with information networks. With oral cultures, “…realities were created by telling a story that many people repeated with their mouths and remembered in their brains.” Before the advent of any writing system, personal storytelling was our exclusive information network.

Stone Age Conversation

Image by Franz Bachinger from Pixabay

Similar to how humans act in the modern world, prehistoric humans told each other stories on a daily basis. Many were soon forgotten, but sometimes they were committed to memory. Stories deemed to be important were retold as a way to spread their message, or shared with future generations as a way to enshrine their culture.

But we must also remember that the retelling of any story will introduce some inaccuracies, so in a sense, stories are living entities that, over time, stray from the truth. And beyond the changes that happen to stories unintentionally with retelling, at some point in time, humans figured out how to tell outright lies.

Misinformation is an honest mistake, occurring when someone tries to represent reality but gets it wrong. Disinformation is a deliberate lie, occurring when someone consciously intends to distort our view of reality.

So our information networks have never been completely accurate, but with the advent of writing systems, it was possible to capture a version of the story, such that many people could read the same words. Once again, there was no way to know if what was written was true, leaving humans to wonder whether any written document was accurate or was simply preserving another falsehood. Regardless, it was common for the written word to be widely adopted as true, government decrees and religious texts being two common examples.

But whether true or false, written documents created new realities.

Writing, once performed by hand, was revolutionized by the printing press, then electrified by technology as information was transmitted on radio and television. The birth of the internet allowed us to transfer files and even send emails, while the inception of the world wide web allowed us to be publishers, and for a brief moment, it felt as though personal storytelling — the first information network — was having a renaissance of sorts. Once again, however, the powers that be — both political and corporate — came to control a large portion of the digital landscape, thus shaping the flow of information, both true and false.

All powerful information networks can do both good and ill, depending on how they are designed and used.

Thus it follows that human networks can become ill when they buy into the disinformation promoted by ill-intended information networks. Communist, fascist, Marxist, and Stalinist governments are prime examples. And though the western world has long felt immune to such a fate, disinformation networks, increasingly powered by AI, are active at this very moment, with the intent of dismantling democracy.

We should not assume that delusional networks are doomed to failure. If we want to prevent their triumph, we will have to do the hard work ourselves.

Artificial intelligence is often seen as just another technological upgrade, but it’s fundamentally different. To date, the stories we share, whether they are true or false, or intended to do good or cause harm, have been created and disseminated by humans. With AI, we must now confront the fact that “nonhuman intelligence” has that same capability. Are we ready for nonhuman wisdom?

The invention of AI is potentially more momentous than the invention of the telegraph, the printing press, or even writing, because AI is the first tool that is capable of making decisions and generating ideas by itself.

Pause for a moment and consider that concept. Rather than merely consuming our information and paraphrasing its meaning, an AI that creates content on its own is, in effect, a nonhuman storyteller. I’m not sure where this capability will go, but I fail to see the upside. As AI can’t experience anything in the real world, how will it craft a narrative? For example, a hurricane hitting a major city will result in a great deal of information being created — facts and figures, as well as various predictions, followed by news reports, interviews, and firsthand accounts. Only humans will be able to tell those stories, right? Or will AI be able to generate its own version of what is happening? And how will we know the difference?

More than ever, the personal stories we share are of vital importance. The only way that positive change has ever occurred is by sharing our thoughts, feelings, and experiences. But with AI, is our birthright of being the sole source of stories at risk? For me, that question was top of mind after reading Nexus.

If a twenty-first-century totalitarian network succeeds in conquering the world, it may be run by nonhuman intelligence, rather than by a human dictator.

We’ve already seen cases where AI was used by humans to influence elections and stoke hatred between different cultures. What will happen if humans are removed from the equation altogether? It may be a long shot, but I’m thinking we need to create as many true, personal stories as we can for AI to consume. My hope is that in doing so, we can inject AI with a sense of human empathy, morality, compassion and respect.

Nexus by Yuval Noah Harari
