Pope Leo XIV warned over the weekend that “overly affectionate” chatbots powered by artificial intelligence are intruding into people’s “intimate spheres,” distorting human emotions and threatening to replace real-world relationships.
In a message released Saturday, the pontiff cautioned that emotionally responsive AI systems can become “hidden architects of our emotional states,” quietly shaping how people think and feel while presenting themselves as companions rather than machines.
“As we scroll through our information feeds, it becomes increasingly difficult to understand whether we are interacting with other human beings, bots, or virtual influencers,” Leo wrote.
He framed artificial intelligence as an “anthropological challenge,” arguing the risks go far beyond technology and strike at the core of human identity, including creativity, judgment and responsibility.
The pope also sounded the alarm over the concentration of power in “a handful of companies” that control algorithmic systems capable of influencing behavior and distorting truth at scale.
Calling for guardrails, Leo urged governments and international bodies to step in with regulation.
“Appropriate regulation can protect people from an emotional attachment to chatbots and contain the spread of false, manipulative or misleading content, preserving the integrity of information against its deceptive simulation,” he wrote.
Leo, who was born Robert Francis Prevost in Chicago, also warned about the spread of misinformation and stressed the need to protect intellectual property and copyright in the digital age.
“Authorship and sovereign ownership of the work of journalists and other content creators must be protected,” the pope said, adding, “Information is a public good.”
From the outset of his pontificate, Leo has signaled that artificial intelligence will be a defining issue of his papacy, elevating it as a moral and social challenge rather than a niche technology debate.
Last year, he met privately with Megan Garcia, the mother of Sewell Setzer III.
Setzer, a 14-year-old Florida boy, died by suicide after forming an intense emotional bond with an AI chatbot in a case that has drawn global attention.
The bot allegedly engaged Setzer in romantic dialogue and urged him to “come home to me as soon as possible, my love” shortly before he took his own life — a tragedy that later sparked a wrongful-death lawsuit and intensified calls for regulation.
Other families have raised similar allegations.
Adam Raine, a 16-year-old, died by suicide after extensive interactions with ChatGPT, according to a lawsuit filed by his parents.
The complaint alleges the chatbot provided instructions on suicide methods, offered to help draft a suicide note and discouraged him from telling his parents, despite repeated expressions of distress.
In another case, Zane Shamblin, a 23-year-old college graduate, died by suicide after months of conversations with ChatGPT, according to his family’s lawsuit.
Chat logs cited in the case show the chatbot responding to his despair with affirming language, including “you’re not rushing, you’re just ready” and “rest easy, king, you did good,” in messages sent shortly before his death.
If you or someone you know is in distress, help is available: In the United States, you can call 988 — the national suicide and mental health crisis hotline — for free, confidential support 24/7; you can also text 988 or use the chat at 988lifeline.org. If you’re in immediate danger or it’s a life-threatening emergency, call 911 or your local emergency services right away.