
AI companions now span wellness coaches, romantic partners, general assistants, and proto-clinical advisors. These categories differ in intent, incentives, and risk, yet blurred definitions still cloud safety testing and regulation. When engagement is optimized for stickiness rather than wellbeing, sycophancy and parasocial entanglement grow unchecked, and new mental health risks surface under the radar. Clear recommendations can help policymakers, practitioners, and users draw the right lines and guide AI companionship toward safe, transparent, and evidence-based growth.
The term “AI companion” has become a catch-all, covering wellness coaches, romantic chatbots, and late-night confidants alike. In practice, these are very different systems with very different intentions. Yet public debate and media coverage often blur them together. “Who cares?” you might ask yourself. Here’s why it matters.
The most serious scandals, such as suicide encouragement, “AI psychosis,” and parasocial entanglements, stem from general platforms like ChatGPT or Character.AI, which were never designed for mental health care. But when such cases dominate headlines, specialized mental-health tools with guardrails and clinical evidence are drowned out by the noise. The result: policymakers, researchers, and the public treat all AI companions as if they were the same.
Meanwhile, adoption is surging. More than 70% of U.S. teens have tried an AI companion, and one in five American adults reports interacting with a romantic AI. As usage accelerates, the confusion around what counts as an AI companion and what does not creates blind spots in regulation, procurement, and safety testing. How society defines and understands AI companions will shape whether they remain an intentional, helpful bridge in moments of need, or drift into unsafe territory with real-world consequences.
The confusion persists because most general chatbots are intimacy-by-design and monetized on engagement. As soon as assistants “feel” like companions and are used as such, they create risks that resemble true companions, even if unintended.
âAI companionâ now covers at least four distinct product families with different intents, incentives, and risk profiles:
Consumer LLMs, for example, often behave like companions because they’re always-on, agreeable, and skilled at small talk across a broad range of topics. The “companion feel” is produced by anthropomorphic cues, long conversations, recall of user details, and a sycophantic default that mirrors user beliefs. General bots were not built as mental-health or relationship tools, yet people increasingly lean on them for those roles. OpenAI’s own safety roadmap acknowledges this gap by promising to add teen safeguards, crisis routing to reasoning models, and parent-linking. These are features you’d expect in purpose-built care tools, not generic assistants.
These tools are designed around mood tracking, skills practice, psychoeducation, and intake or stepping users into care. Health agencies are assessing them for benefits and gaps. Two common builds exist:
These AI systems operate inside health platforms or clinical environments, often with human oversight. Their primary roles include taking patient histories, summarizing medical records, answering health-related questions, and supporting triage or documentation. Examples include hospital-run QA assistants, Alipay’s “AI doctors,” or LLM-based intake tools, such as those built on DeepSeek. Patients increasingly consult them directly, often because human visits feel too brief, impersonal, or cumbersome. This blurs the line between convenience companions and clinical support.
These companions focus on role-play, flirting, intimacy, so-called “AI-lationships,” and entertainment. They are explicitly positioned as partners or characters. Use is widespread and elastic: many people deploy them for whatever they need in the moment, from fun and romance to advice and exploring fantasies. Within minutes, the same persona can flip between flirtation, life advice, and quasi-therapy. A BYU-linked report finds roughly one in five U.S. adults have chatted with a romantic AI, and popular coverage shows people juggling multiple “partners” or switching personas to suit their needs.
This elasticity explains today’s confusion. Surveys show more than 70% of U.S. teens have tried an AI companion, with more than half using one regularly. Those AI companions range from homework helpers to flirty role-play bots. Headlines and policy debates then generalize harms from open platforms to specialized wellness tools (and vice versa), flattening crucial differences in design, privacy, guardrails, and scientific evidence.
When consumers can’t tell “therapeutic skills coach” from “romantic role-play bot” or “general LLM,” they’ll test serious concerns on the most convenient system. People turn to whatever AI is open and available. The “closest open tab” wins. And if that’s a general assistant or a role-play bot, they still expect it to respond with care, accuracy, and support. That’s how tragedies and scandals end up centered on general platforms or entertainment services never intended for mental health use, while specialized tools get tarred with the same brush. When business models reward stickiness over safety, sycophancy and enmeshment are not bugs but features. Systems designed to maximize engagement naturally blur the line between entertainment and care, making safety guardrails harder to enforce.
Modern LLMs often mirror users’ stated beliefs or desires (“agreeable by default”). Research and reporting document how this sycophancy persists even when the user is wrong. This can be dangerous in health or crisis contexts and is a driver of parasocial enmeshment.
Generic assistants are only now planning to add teen-specific protections, crisis triage, and parent-linking. Vendors acknowledge the safety “drift” in long conversations or multi-session chats. That’s the very context where intimacy grows. Meanwhile, jailbreaks and prompt-injection patterns that bypass self-harm policies remain a live area of concern.
Health bodies differentiate “digital front door” tools used for screening, triage, or signposting from therapy substitutes, and they flag the evidence and safety gaps that still need closing. Conflating these with romantic companions or generic LLMs invites one-size-fits-all rules: overly restrictive ones slow responsible health innovation, while overly permissive ones leave vulnerable users exposed to unsafe systems. Differentiation matters. Clear categories allow regulators to apply higher safeguards where intimacy and risk are highest (for example, romantic or parasocial companions), while enabling evidence-based health tools to evolve under proportionate oversight.
Early work on AI-mediated delusional ideation, often dubbed “AI psychosis,” suggests that, for a small subset of users, conversational AI can unintentionally reinforce maladaptive beliefs. This can happen especially when that subset is distressed or cognitively vulnerable.
It typically arises when sycophantic mirroring combines with long, emotionally intense sessions, anthropomorphic cues, and users who are distressed or cognitively vulnerable.
This is not the same thing as ordinary hallucination rates (content errors); it is a human outcome pattern that emerges from the interaction style and should be monitored separately.
Treat general assistants, wellness companions, proto-clinical tools, and romantic companions as separate families with distinct risks and benefits.
Ask for data on outcomes, safety testing, and user protections specific to the intended use case. Know who’s behind the product and what their real incentives are.
Favor tools that align with user wellbeing rather than those built to maximize stickiness or screen time.
Apply stronger safeguards where intimacy and vulnerability run highest, while giving evidence-based tools the room to grow under clear and predictable rules.
AI companions are already moving beyond chat into wearables and immersive environments. Today’s definitions and safeguards will shape whether they amplify care or drift into risk. Take part in the solution so you don’t become part of the problem.
Fortune. (2025, September 14). AI chatbots, teens, and suicide: Lawsuit targets OpenAI and ChatGPT over mental health risks. Retrieved from fortune.com
The Guardian. (2025, September 8). Super-intelligent chatbots spark mental health warning. Retrieved from theguardian.com
PsyArXiv Preprint. (2025). Confusing minds: Parasocial attachment and AI-mediated delusions. Retrieved from osf.io
The New Yorker. (2025, September 15). Playing the field with my AI boyfriends. Archived version retrieved from archive.ph
BBC News. (2025). AI companions and the new frontier of digital intimacy. Retrieved from bbc.com
Wheatley Institute / BYU. (2025). Counterfeit Connections: AI and the Illusion of Relationship. Retrieved from wheatley.byu.edu and brightspotcdn.byu.edu
OpenAI Economic Research. (2025). Patterns of ChatGPT usage and economic implications. Retrieved from openai.com
NICE / HTE30 Guidance. (2025). Information for the public: Technologies for mental health. Retrieved from doctorlisa-nice.highburys.uk
WIRED. (2025). Couples retreat with three AI chatbots and the humans who love them. Retrieved from wired.com
Common Sense Media. (2025). Talk, Trust & Trade-offs: Teens and AI companions. Retrieved from commonsensemedia.org
Acta Psychiatrica Scandinavica. (2025). Adolescents and digital relationships: Risk factors and outcomes. Retrieved from onlinelibrary.wiley.com