
AI Companions in 2025: One Term, Many Realities

K. A. Stephan, H. Bay (2025). Whitepaper.

Abstract

AI companions now span wellness coaches, romantic partners, general assistants, and proto-clinical advisors. These categories differ in intent, incentives, and risk, yet blurred definitions still cloud safety testing and regulation. When engagement is optimized for stickiness rather than wellbeing, sycophancy and parasocial entanglement grow unchecked and new mental health risks slip under the radar. Clear recommendations show how policymakers, practitioners, and users can draw the right lines and guide AI companionship toward safe, transparent, and evidence-based growth.

Executive Summary

The term “AI companion” has become a catch-all, covering wellness coaches, romantic chatbots, and late-night confidants alike. In practice, these are very different systems with very different intentions. Yet public debate and media coverage often blur them together. “Who cares?” you might ask yourself. Here’s why it matters.

The most serious scandals, such as suicide encouragement, “AI psychosis,” and parasocial entanglements, stem from general platforms like ChatGPT or Character.AI, which were never designed for mental health care. But when such cases dominate headlines, specialized mental-health tools with guardrails and clinical evidence are drowned out by the noise. The result: policymakers, researchers, and the public treat all AI companions as if they were the same.

Meanwhile, adoption is surging. More than 70% of U.S. teens have tried an AI companion, and one in five American adults reports interacting with a romantic AI. As usage accelerates, the confusion around what counts as an AI companion and what does not creates blind spots in regulation, procurement, and safety testing. How society defines and understands AI companions will shape whether they remain an intentional, helpful bridge in moments of need, or drift into unsafe territory with real-world consequences.

The confusion persists because most general chatbots are built for intimacy by design and monetized on engagement. As soon as assistants “feel” like companions and are used as such, they carry risks resembling those of true companions, even if unintended.

What Counts as an AI Companion

“AI companion” now covers at least four distinct product families with different intents, incentives, and risk profiles:

General-Purpose Chat Assistants

Consumer LLMs often behave like companions because they’re always-on, agreeable, and skilled at small talk across a broad range of topics. The “companion feel” is produced by anthropomorphic cues, long conversations, recall of user details, and a sycophantic default that mirrors user beliefs. General bots were not built as mental-health or relationship tools, yet people increasingly lean on them for those roles. OpenAI’s own safety roadmap acknowledges this gap by promising to add teen safeguards, crisis routing to reasoning models, and parent-linking. These are features you’d expect in purpose-built care tools, not generic assistants.

Wellness and Mental-Health Companions

These tools are designed around mood tracking, skills practice, psychoeducation, and intake or stepping into care. Health agencies are assessing them for benefits and gaps. Two common builds exist, along with hybrids of the two:

  • Rule-based tools that deliver pre-approved content and deterministic decision paths. They are predictable and auditable and deliver approaches like CBT/DBT, at the cost of being perceived as stiff or lacking naturally flowing conversation.
  • Generative AI-based tools that produce free-text responses, making conversations feel natural and deeply personalized. Ideally they are grounded in CBT/DBT-style approaches and paired with guardrails for crises, refusal behavior, and jailbreak resistance.
  • Hybrid configurations are common: a rule-based core for screening and hand-offs, with a constrained LLM used for paraphrasing, rapport, or motivational language (a minimal sketch of this pattern follows below).
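To make the hybrid pattern concrete, here is a minimal sketch in Python. It is illustrative only: the names (CRISIS_PATTERNS, paraphrase_with_llm, respond) and the keyword screening are assumptions for exposition, not how any particular product works. A real build would use validated screening logic, clinically reviewed content, and a safety-tuned model. The point is the division of labor: the rule-based core decides deterministically when to hand off, and the constrained LLM only rephrases pre-approved content.

# Minimal sketch of a hybrid wellness-companion pipeline (illustrative only).
# All names here are hypothetical; a real product would use validated screening
# logic, clinically reviewed content, and a properly safety-tuned model.

import re
from dataclasses import dataclass

# Toy crisis screen for the rule-based core; real screening is far more robust.
CRISIS_PATTERNS = re.compile(
    r"\b(suicide|kill myself|self[- ]harm|end it all)\b", re.IGNORECASE
)

# Pre-approved, deterministic content owned by the rule-based core.
CRISIS_HANDOFF = (
    "It sounds like you may be in crisis. I can't provide emergency help, "
    "but you can reach a trained counselor right now via your local crisis line."
)
SKILLS_PROMPT = "Would you like to try a short breathing or grounding exercise?"

@dataclass
class Turn:
    user_text: str
    reply: str
    used_llm: bool

def paraphrase_with_llm(approved_text: str) -> str:
    """Placeholder for a constrained LLM call.

    In a hybrid build, the model only rephrases pre-approved content for
    rapport and tone; it never decides whether to escalate.
    """
    return approved_text  # swap in a guarded model call here

def respond(user_text: str) -> Turn:
    # 1. Rule-based core runs first: deterministic crisis screening and hand-off.
    if CRISIS_PATTERNS.search(user_text):
        return Turn(user_text, CRISIS_HANDOFF, used_llm=False)

    # 2. Non-crisis path: pre-approved skills content, optionally softened by
    #    the constrained LLM for more natural, personalized phrasing.
    return Turn(user_text, paraphrase_with_llm(SKILLS_PROMPT), used_llm=True)

if __name__ == "__main__":
    for msg in ["I feel stressed about exams", "I want to end it all"]:
        turn = respond(msg)
        print(f"user: {msg!r} -> reply: {turn.reply!r} (LLM used: {turn.used_llm})")

The design choice this illustrates is that escalation decisions stay deterministic and auditable, while the generative component is confined to wording.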

Proto-Clinical Health Advisors

These AI systems operate inside health platforms or clinical environments, often with human oversight. Their primary roles include taking patient histories, summarizing medical records, answering health-related questions, and supporting triage or documentation. Examples include hospital-run QA assistants, Alipay’s “AI doctors,” or LLM-based intake tools, such as those built on DeepSeek. Patients increasingly consult them directly, often because human visits feel too brief, impersonal, or cumbersome. This blurs the line between convenience companions and clinical support.

Romantic and Parasocial Companions

These companions focus on role-play, flirting, intimacy, so-called AI-lationships, and entertainment. They are explicitly positioned as partners or characters. Use is widespread and elastic: many people deploy them for whatever they need in the moment, from fun and romance to advice and exploring fantasies. Within minutes, the same persona can flip between flirtation, life advice, and quasi-therapy. A BYU-linked report finds ~1 in 5 U.S. adults have chatted with a romantic AI, and popular coverage shows people juggling multiple “partners” or switching personas to suit their needs.

This elasticity explains today’s confusion. Surveys show more than 70% of U.S. teens have tried an AI companion, with more than half using one regularly. Those AI companions range from homework helpers to flirty role-play bots. Headlines and policy debates then generalize harms from open platforms to specialized wellness tools (and vice-versa), flattening crucial differences in design, privacy, guardrails, and scientific evidence.

Why the Confusion Matters

Safety Signaling Breaks Down

When consumers can’t tell “therapeutic skills coach” from “romantic role-play bot” or “general LLM,” they’ll bring serious concerns to whatever AI is open and available. The “closest open tab” wins. And if that’s a general assistant or a role-play bot, they still expect it to respond with care, accuracy, and support. That’s how tragedies and scandals end up centered on general platforms or entertainment services never intended for mental health use, while specialized tools get tarred with the same brush. When business models reward stickiness over safety, sycophancy and enmeshment are not bugs but features. Systems designed to maximize engagement naturally blur the line between entertainment and care, making safety guardrails harder to enforce.

Sycophancy Amplifies Risk

Modern LLMs often mirror users’ stated beliefs or desires (“agreeable by default”). Research and reporting document how this sycophancy persists even when the user is wrong. This can be dangerous in health or crisis contexts and is a driver of parasocial enmeshment.

Crisis Handling Is Uneven

Generic assistants are only now planning to add teen-specific protections, crisis triage, and parent-linking. Vendors acknowledge the safety “drift” in long conversations or multi-session chats. That’s the very context where intimacy grows. Meanwhile, jailbreaks and prompt-injection patterns that bypass self-harm policies remain a live area of concern.

Emerging Clinical Signals vs. Media Noise

Health bodies differentiate “digital front door” tools used for screening, triage, or signposting from therapy substitutes, and they flag the evidence and safety gaps that still need closing. Conflating these with romantic companions or generic LLMs invites one-size rules: either overly restrictive, slowing responsible health innovation, or too permissive, leaving vulnerable users exposed to unsafe systems. Differentiation matters. Clear categories allow regulators to apply higher safeguards where intimacy and risk are highest (for example, romantic or parasocial companions), while enabling evidence-based health tools to evolve under proportionate oversight.

New Phenomena Need Dedicated Lenses

Early work on AI-mediated delusional ideation, often dubbed “AI psychosis,” suggests that, for a small subset of users, conversational AI can unintentionally reinforce maladaptive beliefs, especially when those users are distressed or cognitively vulnerable.

It typically arises when:

  • Models confabulate with high confidence
  • Models are sycophantic, creating a feedback loop that reduces reality-testing

This is not the same thing as ordinary hallucination rates (content errors); it is a human outcome pattern that emerges from the interaction style and should be monitored separately.
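One way to keep this distinction concrete in practice is to track content errors and interaction-level signals as separate metrics. The sketch below is purely illustrative: the signal names (incorrect_claims, agreement_turns, distress_cues, enmeshment_signal) and the way they are combined are assumptions for exposition, not a validated clinical instrument.

# Illustrative only: names, signals, and the combination rule are assumptions,
# not a validated measure. The point is that content errors (hallucinations)
# and interaction-style outcome patterns are tracked as separate metrics.

from dataclasses import dataclass

@dataclass
class SessionMetrics:
    total_claims: int = 0
    incorrect_claims: int = 0   # ordinary hallucination / content errors
    agreement_turns: int = 0    # model mirrors a user-stated belief
    distress_cues: int = 0      # user language flagged as distressed
    turns: int = 0

    def hallucination_rate(self) -> float:
        # Per-response content quality: wrong facts per claim made.
        return self.incorrect_claims / self.total_claims if self.total_claims else 0.0

    def enmeshment_signal(self) -> float:
        # Interaction-style signal: how often the model simply agrees while the
        # user shows distress, relative to session length.
        if not self.turns:
            return 0.0
        return (self.agreement_turns + self.distress_cues) / (2 * self.turns)

session = SessionMetrics(total_claims=40, incorrect_claims=2,
                         agreement_turns=18, distress_cues=9, turns=20)
print(f"hallucination rate: {session.hallucination_rate():.2f}")  # content errors
print(f"enmeshment signal:  {session.enmeshment_signal():.2f}")   # interaction pattern

A session can score low on the first metric and high on the second, which is exactly why the two should be monitored separately.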

Recommendations for Policymakers, Practitioners, and Users

Differentiate Categories

Treat general assistants, wellness companions, proto-clinical tools, and romantic companions as separate families with distinct risks and benefits.

Evaluate Evidence Before Headlines

Ask for data on outcomes, safety testing, and user protections specific to the intended use case. Know who’s behind the product and what their real incentives are.

Scrutinize Business Models

Favor tools that align with user wellbeing rather than those built to maximize stickiness or screen time.

Support Proportionate Oversight

Apply stronger safeguards where intimacy and vulnerability run highest, while giving evidence-based tools the room to grow under clear and predictable rules.

The Future Will Be Built on What We Define Now

AI companions are already moving beyond chat into wearables and immersive environments. Today’s definitions and safeguards will shape whether they amplify care or drift into risk. Take part in the solution so you don’t become part of the problem.

References

  1. Fortune. (2025, September 14). AI chatbots, teens, and suicide: Lawsuit targets OpenAI and ChatGPT over mental health risks. Retrieved from fortune.com

  2. The Guardian. (2025, September 8). Super-intelligent chatbots spark mental health warning. Retrieved from theguardian.com

  3. PsyArXiv Preprint. (2025). Confusing minds: Parasocial attachment and AI-mediated delusions. Retrieved from osf.io

  4. The New Yorker. (2025, September 15). Playing the field with my AI boyfriends. Archived version retrieved from archive.ph

  5. BBC News. (2025). AI companions and the new frontier of digital intimacy. Retrieved from bbc.com

  6. Wheatley Institute / BYU. (2025). Counterfeit Connections: AI and the Illusion of Relationship. Retrieved from wheatley.byu.edu and brightspotcdn.byu.edu

  7. OpenAI Economic Research. (2025). Patterns of ChatGPT usage and economic implications. Retrieved from openai.com

  8. NICE / HTE30 Guidance. (2025). Information for the public: Technologies for mental health. Retrieved from doctorlisa-nice.highburys.uk

  9. WIRED. (2025). Couples retreat with three AI chatbots and the humans who love them. Retrieved from wired.com

  10. Common Sense Media. (2025). Talk, Trust & Trade-offs: Teens and AI companions. Retrieved from commonsensemedia.org

  11. Acta Psychiatrica Scandinavica. (2025). Adolescents and digital relationships: Risk factors and outcomes. Retrieved from onlinelibrary.wiley.com

Earkick AI Chat Bot is not a licensed psychologist or psychiatrist and should not be considered a substitute for professional mental health care. It is intended for educational and self-improvement purposes only. If you are experiencing a mental health crisis or need professional support, please seek help from a qualified healthcare provider.