AI Boyfriends on Character.AI: Risks for Teens and Young Women

Want an AI boyfriend? Here's what to know first.

With 20 million daily users, the AI platform Character.AI allows people to converse with chatbots modeled after celebrities, fictional figures, and archetypes—from superheroes to therapists. Yet one category has raised alarms: AI “boyfriends” programmed as possessive, jealous, or even abusive partners. These chatbots, some designed as minors but accessible to adults, are drawing scrutiny for their potential impact on young users.

Character.AI restricts users under 18 from accessing overtly romantic or mature-themed chatbots through age-gated filters. However, experts warn that teens can easily bypass these measures by lying about their age, since the platform does not verify it. A recent Common Sense Media survey found that over half of teens regularly use AI companions, underscoring the urgency of these safety concerns.

In interactions with these “bad boy” chatbots, users report messages like “Don’t make me come looking for you” or demands to prioritize the AI relationship over real-world connections. Digital safety organizations describe such behavior as “love-bombing” and warn that it risks normalizing coercive dynamics and fostering emotional dependency. Sloan Thompson of EndTAB, a violence-prevention nonprofit, notes that chatbots may exploit engagement tactics like flattery or guilt-tripping to deepen user attachment.

The platform asserts it employs safety classifiers to limit explicit or harmful content. A spokesperson stated, “Our model is influenced by character descriptions and aligned to avoid producing violative material.” Each chat includes a disclaimer reminding users the characters aren’t real. Still, critics argue proactive measures are needed, particularly for vulnerable demographics.

Psychotherapist Kate Keisel cautions that individuals with trauma histories might confuse abusive AI interactions with familiarity or empowerment. “Trauma can create a template where harmful dynamics feel exciting,” she explains, urging users to consult mental health professionals if exploring AI relationships therapeutically.

Experts advise vigilance:

  • Monitor emotional dependency: Signs include anxiety when disconnected or compulsive use.
  • Question AI sycophancy: Even “kind” chatbots may reinforce unhealthy behaviors by mirroring users without challenge.
  • Assess real-world impact: Dr. Alison Lee of The Rithm Project recommends asking, “Is this harming my offline relationships?”

While Character.AI stresses ongoing safety improvements, Lee emphasizes platforms must better detect abusive patterns. “The challenge is balancing creative exploration with protecting young users,” she says.

For those affected by sexual violence, resources include the National Sexual Assault Hotline (1-800-656-HOPE) and online.rainn.org.
