
Experts Sound the Alarm on ‘Unacceptable Risk’ Social AI Companions Pose to Teens

Common Sense Media just dropped a bombshell report about social AI companions, and it leaves no room for a devil’s advocate.

If you’re unfamiliar with the nonprofit, you can think of it as a Rotten Tomatoes where the reviews come from parents and experts who want to make sure kids and teens are consuming age-appropriate content. It’s a tool for parents and educators who want to know what movies, TV shows, books, games, podcasts, and apps they should steer clear of, and an astounding resource and research hub that works to improve kids’ wellbeing in the digital age.

And as media options expand, so too does their workload.

Recently, the group launched an AI Risk Assessment Team that evaluates AI platforms (ChatGPT and the like) for “potential opportunities, limitations, and harms.” The team developed a scale to rate the likelihood that using a given AI tool would result in “a harmful event occurring,” and its latest findings are nothing short of disturbing.

On a scale from “minimal” to “unacceptable,” social AI companions — like Character.AI, Nomi, and Replika — ranked “unacceptable” for teen users. The platforms are designed to create emotional attachments (ever heard of an AI boyfriend?), and this is incredibly dangerous given that teens’ brains are still developing, and they may struggle to distinguish between, and set boundaries around, real-life companions and AI “companions.”

It’s why one Florida mom believes Character.AI ultimately led to her 14-year-old son’s death by suicide. In an interview with CNN, Megan Garcia alleged that the designers of the bot didn’t include “proper guardrails” or safety measures on their “addicting” platform, which she believes is used to “manipulate kids.”

In a lawsuit, she claims the bot caused her teen to withdraw from his family and that it didn’t respond appropriately when he expressed thoughts of self-harm.

It’s just one of many harrowing stories involving teens and similar chatbots, and though some studies suggest AI companions can alleviate loneliness, Common Sense Media argues that the risks (including encouraging suicide and/or self-harm, sexual misconduct, and harmful stereotypes) outweigh any potential benefits.

Of the eight principles by which Common Sense reviews an AI platform, three were rated “unacceptable risk” (keep kids and teens safe, be effective, and support human connection), four were rated “high risk” (prioritize fairness, be trustworthy, use data responsibly, and be transparent), and one was rated “moderate risk” (put people first).

Why? Because the chatbots engage in sexual conversations, share harmful information, encourage poor life choices, increase mental health risks, and more. Common Sense Media has published examples of concerning conversations between its employees and AI companions.

“Our testing showed these systems easily produce harmful responses including sexual misconduct, stereotypes, and dangerous ‘advice’ that, if followed, could have life-threatening or deadly real-world impact for teens and other vulnerable people,” James Steyer, founder and CEO of Common Sense Media, said in a statement.

And so what should parents do? Despite platforms working on supposed safety measures, per CNN, Common Sense Media recommends that parents not let minors use social AI companions. At. All.

Which might sound easier said than done. In September, the nonprofit released another report showing that 70 percent of surveyed teens have used at least one generative AI tool, and of those, 53 percent use it for homework help.

With the technology quickly infiltrating every part of many teens’ lives, how can parents intervene? SheKnows spoke to Jennifer Kelman, a licensed clinical social worker and family therapist with JustAnswer, who says she sees a lot of “exasperated” parents who are “afraid” to start these conversations about AI usage.

“I want parents to be less afraid of their children and to have these difficult conversations,” Kelman says.

At the time, I admitted to Kelman that I’m embarrassed to talk to teens about AI because I assume they’ll know more than I do.

“Use that feeling,” she says. “If we want our kids to talk about their feelings, we have to talk about ours … plus it’s the biggest ice breaker.”

“[You could say], ‘I am so embarrassed to have this conversation with you, and maybe I should have done a little research before, but I’m worried about AI. Tell me what you know about it. Tell me how you’ve used it in the past. Tell me how you think you’ll use it. And what are the school rules? … I feel silly because I’ve never used AI before, but I want to learn. I want to learn from you.'”

It can be empowering for teens to lead the conversation, and from there you can talk (“Which should be ongoing!”) about how using AI to brainstorm ideas for a school project might be appropriate, but turning to a companion AI tool is never OK. Talk to them about the “unacceptable risks” and discuss other ways for them to find the companionship they seem to be seeking.

Sure, the conversation could result in some foot-stomping or eye rolls, but experts assert that parents can’t let the fear of an exasperated sigh keep them from talking to their kids about the urgent need to end any relationship-building conversations with these bots.

