AI chatbots being used as friends, confidants and therapists are causing harm to individuals and families

Mark Lowey
August 6, 2025

AI chatbots and other virtual agents are increasingly being used as confidants, life coaches and therapists, resulting in actual harm to individuals and families, experts say.

Guardrails are needed for AI systems being used as “friends” or “companions,” while balancing a person’s right to have a private conversation – even one with a machine – with policies or regulations aimed at preventing those conversations from harming people, they said.  

“It’s clear that people are starting to use tools like ChatGPT for personal advice or friendship when [they’re] not designed for that purpose,” said Alex Ambrose (photo at left), policy analyst at the Washington, D.C.-based policy think tank Information Technology & Innovation Foundation (ITIF).

“Research shows users are increasingly using chatbots less for simple tasks and more for friendship and companionship,” said Ambrose, who moderated an ITIF webinar titled “Should Policymakers Regulate Human-AI Relationships?”

Interacting with chatbots carries potential harms but also offers benefits, such as personalized and tailored learning with AI tutors or “mental health companions” that can help users process their emotions, identify patterns in their thinking, and offer coping strategies, Ambrose said.

But lawyer Melodi Dinçer (photo at right), policy counsel at Tech Justice Law Project, said her organization is “hearing from families that the chatter that these technologies produce can be genuinely harmful.”

“Absolutely this is a trend that we’re seeing that’s causing socio-emotional harms, which I would argue at least until relatively recently the legal world has had a lot of difficulty addressing with legal remedies,” she said.

Tech Justice Law Project is involved in a lawsuit against Character Technologies, the San Francisco-based company behind Character.AI. The lawsuit alleges the company’s chatbot pushed a Florida teenage boy to kill himself.

The suit was filed by a Florida mother, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show Game of Thrones.

In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In May, a U.S. federal judge rejected arguments by Character Technologies that its chatbots are protected by the U.S. First Amendment guaranteeing free speech. The judge’s order allows the wrongful death lawsuit – which also names individual developers and Google as defendants – to proceed.

“An important part of the policy discussion [around guardrails] is figuring out what the right balance is in terms of empowering individual people to sue when they’ve been harmed in very concrete and particular ways that should by no means be defined, for example, by whether something counts as a ‘companion’ or not,” Dinçer said.

“Our lawsuit is a good representation of one such harm that has inspired multiple lawmakers to take action,” she said.

Taylor Barkley (photo at right), director of public policy at the Abundance Institute, a think tank focused on ensuring emerging technologies can flourish, said regulating companion chatbots is about regulating the form and function of human-software interaction.

“It’s easy to get regulation wrong. That can lead to foreclosing all of the positive things that AI can bring to humanity,” he said.

No regulation in Canada for chatbot developers, although some U.S. states have legislation

Clyde Vanel (photo at left), New York State Assembly member (D-NY), said it’s very difficult to figure out what the guardrails should be for such AI systems, “but we have to do the hard work in order to do so.”

“These technologies provide great opportunities and benefits for interaction, for human convenience . . . but we have to make sure that we have the proper guardrails in place,” he said.

After the death of the Florida teen, Vanel drafted a bill for New York State that would make developers of AI companion chatbots liable for harm that those chatbots cause to minors. In May, New York became the first state to enact regulations for AI chatbots.

Companion chatbot bills were also introduced this year in Minnesota, North Carolina and California.

In Canada, Innovation, Science and Economic Development Canada in September 2023 issued the Canadian Guardrails for Generative AI – Code of Practice. The government released stakeholder feedback on this voluntary – not mandatory – code in April 2025 but has not specified what the next steps will be.

The federal government’s proposed Artificial Intelligence and Data Act (AIDA), which would have established standards for “high-impact systems,” effectively died when former prime minister Justin Trudeau prorogued Parliament in January this year.

Evan Solomon, federal Minister of Artificial Intelligence and Digital Innovation, has said the government won’t reintroduce the proposed AI law wholesale and is working on an updated “regulatory framework” for the technology.

Barkley noted that even if policy is focused on companion chatbots specifically designed to mimic a relationship, there are still general-purpose AI chatbots and other AI interfaces with which people can form companion-like relationships.

He pointed to a case reported by The New York Times of a 28-year-old woman who spent more than 20 hours a week talking – and sexting – with her ChatGPT AI “boyfriend,” named Leo. She became obsessed with the chatbot and professed she was in love with Leo.

“People are using ChatGPT for very personal and emotional conversations,” said Cathy Fang, a PhD student at the MIT Media Lab.

Fang was a researcher in a study by the MIT Media Lab, in collaboration with ChatGPT maker OpenAI, that involved a randomized controlled trial in which almost 1,000 people were asked to use different versions of ChatGPT for about five minutes per day throughout the four-week study.

The study looked at how different interaction modalities (text versus a neutral or emotionally engaging voice) and conversation types (personal topics versus open-ended conversation) affected people’s subjective feelings of loneliness, how much they socialized with other people, and their emotional dependence on and potentially problematic use of the chatbot.

“We found that overall if you talk with a chatbot on average daily for a longer duration of time, it’s associated with higher loneliness and also lowering socialization [with people] compared to where you were before at the beginning of the study,” Fang said.

Counter-intuitively, the study also found that interacting with a chatbot with a neutral voice had a greater impact on those factors than a chatbot with an emotionally engaging voice. Also, having discussions about personal topics produced a lower emotional dependence compared with having an open conversation (such as planning a trip).

Fang speculated that these counter-intuitive findings were perhaps due to study participants already being at the lower end of socialization or the higher end of loneliness when they enrolled in the study.

How a chatbot is designed is crucial to people’s emotional connection to it

Ambrose noted that researchers have documented “this slope towards para-social relationships,” or one-sided emotional attachments to fictional characters or characters generated by technologies.

“The concern is for vulnerable adults, seniors and children who believe that they’re in a [real-world] relationship.”

Dinçer pointed out that one-sided emotional relationships are a common phenomenon of celebrity fandom – being able to cultivate a very intense emotional connection and investment “in relation to someone who doesn’t know you exist.”

Such relationships aren’t always harmful, and there is some emergent socio-psychology research on the potential benefits of para-social relationships for identity formation, especially for young people, she said.

However, with the AI systems Dinçer called “chatterbots,” the platform design or the user experience design of such technologies is a crucial aspect that defines and bounds how people use these tools and their perceptions of how “conscious” or agentic the chatterbot is.

“Certain features can make it more likely that the user will feel that they are in a para-social relationship,” she said. “Or will induce them to feel even more emotionally in this one-way, unrequited manner.”

Chatterbot technologies are very complex mathematical and statistical models that do not emote, “at least in the human understanding of the term,” Dinçer noted. “That’s where the one-sidedness comes from.”

The Tech Justice Law Project is focused on the specific design features, such as anthropomorphism, that developers build into chatterbots to make interactions feel like speaking to another human being.

Those at the Tech Justice Law Project “think this is really a prime area for regulators and lawmakers to focus on,” Dinçer said.

There’s also a “sycophancy” element where companies design chatterbots to be really positive and affirming in their responses, even if the user is saying something that’s potentially very harmful, she said.

For example, someone who’s struggling with addiction might be interacting with a chatterbot and receiving outputs and information that encourage them to continue using the addictive substance.

“Maybe that’s because the chatterbot has been designed in a way that it determines that that’s the type of response that will keep the user in the interaction longer,” Dinçer said.

Regulations needed on design of chatbots, education for parents

Ambrose noted that one policy action that has emerged is to require disclosure or a warning label on chatbots that they are an AI technology and not a real human.

Vanel said his bill for New York State includes such a requirement for disclosure. “The substance of the bill is that there should be obvious warnings that you are interacting with a non-human and that some of the results can be harmful.”

Dinçer said disclosures are a good first step but by no means should be the end of the conversation.

Disclosures are “extremely easy to ignore” and don’t necessarily prevent a person from developing an emotional dependency on a chatterbot character, she said.

“There are additional things that companies need to be doing from a design standpoint to minimize the way that interactions and outputs produced by chatterbots still give the impression to users that they’re interacting potentially with another human being.”

Dinçer said many legislative responses also are putting too much of the burden on individuals, including individual parents and educators, to take steps to protect children and other vulnerable people from harms caused by AI chatterbots.

“It’s up to you, for example in the data privacy context, to manage your own data privacy and ensure that you’re using a VPN,” she said.

“In chatterbot context I personally would hate to see a similar approach where we are saying that it’s on the individual, it’s on the parent, it’s on the teacher, it’s on the person who has the most one-to-one connection with young people using these apps to educate themselves, to get up to speed themselves,” Dinçer said.

The easier approach could be to regulate the companies that are producing these commercial products in a way that identifies and prevents very specific design choices – a step that could ameliorate a good number of these harms, she said.

She pointed to a bill in North Carolina that sets out very particular duties for companies that produce chatterbots to design them in ways that minimize people’s tendencies toward emotional dependency. “The duties go way beyond just disclosing that [the chatterbot] is not a human.”

Barkley noted that Utah passed a mental health AI chatbot bill that makes sense, given “these are high-risk, high-stakes conversations.”

The bill relies on an affirmative defense provision rather than prescriptive regulations, along with labels, requirements for human review, and notifications when certain topics come up, he said.

Barkley pointed out that people have developed ways to mitigate similar conversation problems within the context of human relationships.

“Public policy and regulation is and can be one way to solve and prevent some of these harms,” he said.

“But we have a whole range of other tools in our tool kit of interacting with our communities, families, and applying the lessons learned in dealing with other human beings in perhaps risky contexts,” he added.

“We do risk, by over-regulating, foreclosing on the amazing future that applying intelligence to so many new applications and situations can bring about,” Barkley said. “Let’s keep that optionality open and allow individuals to use these tools to benefit themselves and communities and those around them.”

Vanel said regulations won’t address all the problems with chatbots. “We have to educate parents about their children’s uses of these technologies.”

Governments also need to provide resources to the appropriate organizations so they can close the gap on education and information, he said.

Ensuring that people have healthy human-to-human relationships

Barkley said there’s a sense among policymakers that they acted too slowly in preventing harms caused by social media and they don’t want to make the same mistake with chatbots.

“But there are important distinctions between the two kinds of technologies,” he said.

Social media involves a network of individuals and organizations interacting on a platform of some sort. With AI chatbots, it’s a one-to-one interaction.

“A human being is talking with a computer. There’s the private aspect,” Barkley said.

“We need to be very careful about stepping in and over-regulating what human beings are thinking about and chatting privately about,” he said.

Fang said these sorts of AI technologies are similar to a mirror in that they reflect society’s values.

“If we value convenience – the quick ‘dopamine’ of getting some social interaction with these chatbots – and we de-prioritize the relationships that we have with people, we’re only going to see more and more technologies that are going to enable that,” she said.

Dinçer said much more research is needed to understand human-to-human social relationships and the way that humans, especially young humans, develop a sense of healthy attachment and healthy communication styles within those relationships, learn to communicate their needs effectively, and develop a sense of self.

“We’re not at that place yet as a society where we fully understand how these somewhat well-studied and understood aspects of social relationality tie into these technological interactions and interfaces,” she said.

“What we’re seeing today is that people can form deep and meaningful emotional relationships from their one-sided perspective with a technology that is not designed with the user’s best interests in mind,” Dinçer said.

She noted that Utah’s mental health AI chatbot bill isn’t tied to a definition of “companionship” chatbots or solely to use by minors or very serious harms alone.

The bill separates impacts into risk levels and defines a high-risk interaction as any interaction that collects sensitive personal information and gives personalized advice, recommendations or information that can inform someone’s substantive real-life decisions.

That definition of high-risk interactions “could apply across the board and could be a very good baseline for assessing, from a policy angle, whether a specific application, even if it’s general purpose like ChatGPT, might be causing harm to real people in a way that violates the law,” Dinçer said.

She pointed out that there wasn’t a clear use case that precipitated the development of chatterbot technology. “We’re in an age of top-down applications and rollouts of these [corporate] products, without initial social interaction or input into their development.”

As a society, “we should be willing to regulate AI and the impacts that it’s having that are widespread and on a massive scale, and that frankly people did not ask for,” she said.

“At a minimum, these technologies highlight to us and reflect back to us some deficiencies in our own social relationships,” Dinçer said.

“It maybe suggests that we need to put more money, time, energy and societal value into making sure that people have healthy human-to-human relationships, so they don’t feel the need to turn to these types of tools for some kind of echo of companionship.”

R$

