
New Laws, Same Mission: AI for Mental Health in 2025

K. A. Stephan and H. Bay. 2025. Whitepaper.

Abstract

As AI mental health tools become mainstream, states like Illinois and Nevada are introducing laws to define what these tools can and cannot do. This paper breaks down key regulations like HB 1806, explains why they’re emerging now, and shows how privacy-first platforms like Earkick can stay compliant without compromising their mission: accessible, stigma-free support that doesn’t mimic therapy or store user data.

1. Fast-Moving Context

Map/chart showing 160M+ people in mental health shortage areas

AI-powered mental-health tools went from fringe curiosity to everyday companion in under five years. Millions now open an app instead of waiting weeks for a human appointment. Access gaps are the rocket fuel: nearly 160 million people in the U.S. live in a county with a shortage of licensed therapists, and demand keeps climbing.

In 2025 lawmakers began asking a hard question: Who is allowed to sound like a therapist?

Illinois' HB 1806 is the first bill that offers a concrete answer. Legal drafters in Nevada, New York, Utah, and California are following close behind.

This whitepaper explains:

  • What HB 1806 actually says (stripped of legal jargon).

  • Why state-level regulation is accelerating right now.

  • How user-controlled, zero-PII platforms like Earkick can adapt without compromising our founding mission of free, stigma-free support.

2. Why Regulation Is Emerging Now

The momentum behind new AI mental health laws is a response to real and growing pressure.

Infographic "The Case for AI in Mental Health"

The United States faces a severe shortage of licensed mental health professionals. In many regions, people wait for more than six weeks to get an appointment. Into that gap stepped round-the-clock, low-cost AI companions that feel safe to try because they don't judge or require insurance. These AI-powered self-care tools have rapidly gained traction. A recent YouGov survey found that 55 percent of Americans aged 18 to 29 are comfortable discussing mental health concerns with an AI chatbot they trust.

This surge in use has raised concerns among lawmakers, who point to patient safety, responsible data use, the risk of AI systems impersonating licensed professionals, and the absence of clear national standards. With more people turning to digital tools for support, the stakes have become higher.

Lawmakers worry that the same immediacy and intimacy driving adoption could backfire if tools blur the line between clinical treatment (which requires licensure) and self-help coaching (which does not). Because more people rely on these apps every day, mistakes carry bigger consequences.

That urgency shows in a wave of proposed bills: Illinois, Nevada, New York, Utah, and California are all drafting guardrails. Details vary, but each bill tries to protect users without choking off innovation. Regulation is moving fast precisely because adoption is already widespread, and the cost of getting the guardrails wrong is rising just as quickly.

3. Illinois HB 1806 in Plain English

Illinois is the first state to translate those concerns into black-and-white rules. HB 1806 draws a bright line: only licensed humans may perform anything resembling clinical therapy, while AI is welcome to handle purely administrative or behind-the-scenes support. The statute even itemizes what counts as "therapy" (diagnosis, treatment decisions, emotion detection) versus what is simply clerical help.

The table below distills HB 1806 at a glance.

What it Allows vs. Blocks

AI Use Case | Permitted? | Notes
Appointment scheduling, billing, reminders | Yes | Counts as administrative support.
Drafting session notes, trend analysis | Yes | Must stay non-therapeutic; no emotion detection.
Auto-detecting emotions or anxiety in a therapy session | No | Explicitly banned under §20(b)(4).
Chatbot delivering therapeutic advice directly to a user | No | Only humans with state licenses may do so.
AI-generated treatment plan reviewed and signed off by a licensed professional | Yes | Human remains fully responsible.
Transcription with AI help | Depends | Permitted if written consent is obtained up front.

Table of what the Illinois bill would allow or block

The law imposes penalties of up to $10,000 per violation, enforced by the Illinois Department of Financial & Professional Regulation. It takes effect immediately upon the governor's signature, expected in summer 2025.


4. Similar Laws: Nevada, New York, California, and Utah

Illinois may be first out of the gate, but two other states have already locked in statutory text, and several more have live bills on the table. Together, they sketch out a regulatory "early wave" that any AI-mental-health product team now has to track.

Nevada AB 406 (signed June 5; effective July 1, 2025)

This bill bars any AI system from advertising, offering, or delivering services that constitute professional mental or behavioral healthcare unless a licensed human remains in charge.

  • Clinician guardrails: Even licensed providers must keep AI in a strictly assistive role.
  • Disclosure: Users must be clearly informed when AI is involved and what it is (and is not) doing.
  • Penalty backdrop: Violations can trigger both fines and licensure consequences under Nevada law.

New York AI Companion Safeguards Law (effective Nov 5, 2025)

This law targets apps that market themselves as conversational friends or confidants. It aims to prevent deception and protect users during moments of emotional vulnerability.

  • Core requirement: Always-on disclosure that the user is interacting with a bot, not a person.

  • Crisis-signal safeguards: Developers must implement features to identify and respond to self-harm cues.

  • Transparent consent: Users must affirm understanding of the bot's non-human status before proceeding.

California & Utah Have Bills in Motion

California introduced two bills: one would ban bots from impersonating licensed therapists, and the other would bar minors from using AI mental health tools unless strict criteria are met.

Utah's HB 452, passed in May, mandates transparency, impersonation controls, and safety protocols for any AI application used in mental health contexts.


U.S. States with bills in motion

5. What This Means for Health Tech Companies

For many digital mental health startups, the new laws present a mix of uncertainty, added cost, and branding dilemmas. Some companies are now changing their wording entirely, removing terms like "therapist" from their websites and app descriptions to avoid regulatory risk. Others are introducing geofencing or feature toggles, disabling certain functions for users in states like Illinois where legal thresholds are stricter.
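
To make the geofencing idea concrete, here is a minimal sketch of a jurisdiction-aware feature toggle. The feature names, region codes, and rule table are illustrative assumptions made for this paper, not a description of any specific company's compliance logic or of what the statutes actually require.

```swift
// Minimal sketch of a jurisdiction-aware feature toggle.
// Feature names, region codes, and the rule table are illustrative
// assumptions, not any company's actual compliance logic.

enum Feature: Hashable {
    case emotionDetectionInSession
    case directTherapeuticAdvice
    case moodJournaling
}

struct CompliancePolicy {
    // Features switched off per US state, under one reading of each bill.
    private let disabledByState: [String: Set<Feature>] = [
        "US-IL": [.emotionDetectionInSession, .directTherapeuticAdvice], // HB 1806
        "US-NV": [.directTherapeuticAdvice]                              // AB 406
    ]

    func isEnabled(_ feature: Feature, inRegion region: String) -> Bool {
        !(disabledByState[region]?.contains(feature) ?? false)
    }
}

let policy = CompliancePolicy()

// Example: gate a capability before surfacing it to a user located in Illinois.
if policy.isEnabled(.moodJournaling, inRegion: "US-IL") {
    print("Mood journaling stays available in Illinois")
}
if !policy.isEnabled(.emotionDetectionInSession, inRegion: "US-IL") {
    print("In-session emotion detection is disabled in Illinois")
}
```

A toggle like this lets a team switch off one capability in one state without branching the whole codebase, which is why many startups reach for it first.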

The biggest challenge is ambiguity. Many founders are still asking where exactly the line is between self-care and therapy, and legal teams often offer cautious or conflicting answers. As a result, startups are spending more on compliance reviews and state-specific audits. For some, this means slowing down releases or pausing investments in certain features.

There's also a growing concern that innovation could be chilled. Critics argue that the current laws don't distinguish between ethical, transparent tools and those that mislead or overreach. Others warn that overly broad restrictions risk shutting down solutions that are already helping expand access to care. The tension is clear: how to protect users without cutting off safe, responsible tools that millions rely on.

6. Why This Matters for Users

The stakes aren't just legal or commercial but deeply personal. Demand for mental health support is skyrocketing, yet licensed therapists remain in short supply, especially in rural and underserved areas. For many people, waiting weeks for an appointment or navigating insurance hurdles simply isn't an option.

AI-based self-care tools have stepped in to fill some of that void. Millions use apps to track their mood, reflect on their stress patterns, and build daily habits that support emotional wellbeing. These tools aren't trying to replace human therapists, but they do offer something people urgently need: a safe, accessible, and judgment-free starting point.

If regulation becomes too rigid or unclear, the risk is that people stop seeking help or, worse, that they turn to unregulated, opaque, or even predatory alternatives. The challenge for lawmakers is to build safeguards without making support harder to access. And for ethical tech companies, the goal is to keep earning trust while staying well within the guardrails.

7. Why Earkick Welcomes Regulation

Earkick welcomes the new wave of state regulation because clear guardrails protect users, create awareness, and reward companies that speak plainly about what they do. The platform's architecture already aligns with that spirit, so if lawmakers call for new disclosures, wording tweaks, or geo-specific feature adjustments, those updates can ship quickly without disrupting anyone's daily routine.

The reason is simple: Earkick was engineered around zero personally identifiable information. Users never create an account, and every note, mood entry, or voice clip resides solely in their control. They choose to share, unshare, or delete it.

That privacy foundation supports the app's deeper goal of user empowerment. The chatbot offers effortless check-ins, highlights patterns, and gently nudges people toward talking with a trusted friend or licensed professional. Emotion detection is used only to prefill the mood entry in the user's journal. Every suggestion is editable and always user-controlled.
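
As a rough illustration of that data model, the sketch below shows a journal entry that lives only on the device, carries no account or user identifier, starts from a prefilled mood the person can overwrite, and can be deleted outright. The type names and fields are assumptions made for this example, not Earkick's actual schema.

```swift
import Foundation

// Rough sketch of an account-free, on-device journal.
// Type names and fields are assumptions for illustration only.

struct MoodEntry: Codable {
    let createdAt: Date
    var mood: String     // prefilled by on-device emotion detection, editable by the user
    var note: String?    // optional free-text note; stays local unless the user shares it
}

struct LocalJournal {
    private(set) var entries: [MoodEntry] = []   // held locally; no user ID anywhere

    // Emotion detection only suggests a starting value; the user owns the final entry.
    mutating func addEntry(prefilledMood: String, note: String? = nil) -> Int {
        entries.append(MoodEntry(createdAt: Date(), mood: prefilledMood, note: note))
        return entries.count - 1
    }

    mutating func editMood(at index: Int, to mood: String) {
        entries[index].mood = mood
    }

    mutating func deleteEntry(at index: Int) {
        entries.remove(at: index)
    }
}

var journal = LocalJournal()
let i = journal.addEntry(prefilledMood: "stressed", note: "Long day")
journal.editMood(at: i, to: "tired")   // the user corrects the suggestion
journal.deleteEntry(at: i)             // or removes it entirely
```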

Earkick never diagnoses, never prescribes, and never tires of encouraging help-seeking when self-care alone isn't enough.

Everything is wrapped in radical transparency. From the opening screen, clear disclaimers explain that Earkick is a self-care and resilience companion, not a licensed provider or substitute for therapy or medication.

Given that millions face long waits, high costs, or social stigma when they reach for mental health support, Earkick exists to shrink that gap. It offers an immediate, judgment-free starting point while steering users toward human help whenever the situation calls for it.

Welcoming regulation, safeguarding privacy, and empowering users are not separate strategies. At Earkick, they are woven together as part of one mission: making evidence-based self-care universally accessible while protecting both safety and trust.

8. Looking Ahead: Shared Responsibility, Shared Upside

AI self-care tools have become a vital part of the mental health support system. The latest state laws show that regulators, technologists, clinicians, and users are beginning to align on shared goals. When each group takes responsibility, whether it is setting clear rules, designing with transparency, creating evidence-based content, or using tools mindfully, the potential is enormous: faster access to support, reduced pressure on clinical systems, and better data to guide public health efforts. This is the direction Earkick is moving toward, one where innovation serves safety and privacy stays central to progress.


References

  1. The Commonwealth Fund. Understanding the U.S. Behavioral-Health Workforce Shortage. May 2023.

  2. YouGov America. Can an AI Chatbot Be Your Therapist? January 2024.

  3. Illinois General Assembly. HB 1806 – Wellness and Oversight for Psychological Resources Act. 104th GA, 2025.

  4. Wilson Sonsini Goodrich & Rosati. Nevada Passes Law Limiting AI Use for Mental and Behavioral Healthcare. June 2025.

  5. Wilson Sonsini Goodrich & Rosati. New York Passes Novel Law Requiring Safeguards for AI Companions. June 2025.

  6. Utah Legislature. H.B. 452 — Artificial Intelligence Amendments. March 2025.

  7. Los Angeles Times. California Senate Passes Bill to Make AI Chatbots Safer. June 3, 2025.

  8. Yeung, N. A State Bill Threatens to Ban AI Therapists, Forcing Health-Tech Startups to Pivot. Endpoints News, May 2025.

Earkick AI Therapist is not a licensed psychologist or psychiatrist and should not be considered a substitute for professional mental health care. It is intended for educational and self-improvement purposes only. If you are experiencing a mental health crisis or need professional support, please seek help from a qualified healthcare provider.