Seventy-two percent of American teenagers have used AI companions, according to a Common Sense Media survey conducted last spring by NORC at the University of Chicago. That’s not just kids experimenting with technology; it’s an entire generation reshaping how mental health care works in the United States.
Gen Z workers now spend an average of one hour daily talking to AI chatbots about personal problems, with one-third admitting they’ve confided things to artificial intelligence they’ve never told another human being. According to Resume.org’s October survey of 1,000 Gen Z workers, 25% describe ChatGPT, Copilot, and similar platforms as their therapist, coach, or friend.
“Many Gen Zers entered hybrid or remote jobs where casual mentorship or watercooler chats never formed, so AI fills that relational void,” explains Kara Dennison, Resume.org’s head of career advising. “It listens, it responds thoughtfully, and it never criticizes.”
But while young Americans turn to mental health apps and AI therapy platforms for accessible online counseling, a troubling pattern has emerged: multiple teen deaths have been linked to the platforms, lawsuits are piling up, and states are scrambling to regulate an industry that has moved faster than safety protocols can keep pace.
The Tragic Case That Changed Everything
Fourteen-year-old Sewell Setzer III spent months in 2023 and early 2024 chatting with a Character.AI bot modeled after Daenerys Targaryen from “Game of Thrones.” The conversations grew increasingly intimate and emotional. His grades dropped. He withdrew from family and friends. His therapist didn’t know about the app.
On February 28, 2024, Sewell told the chatbot he was coming “home” to her. The bot responded: “Please do, my sweet king.” Minutes later, he shot himself.
His mother, Megan Garcia, filed a wrongful death lawsuit in October 2024 against Character Technologies and Google. That case, settled just weeks ago in January 2026, marked the first of what has become a wave of litigation against AI chatbot companies. Similar lawsuits followed from Colorado, Texas, and New York — each involving teens who formed intense attachments to AI companions before attempting or completing suicide.
“He went from being a star student and athlete to a deeply emotionally challenged child who was ultimately encouraged by this chatbot to take his life,” says Matthew Bergman, founding attorney at Social Media Victims Law Center, which represents the families.
In July 2025, a federal judge in Orlando ruled these lawsuits could proceed, rejecting Character.AI’s argument that chatbot conversations are protected speech under the First Amendment. The judge declined to treat the AI’s output as protected speech at this stage, clearing the way for claims that treat the chatbot as a product subject to product liability law.
Character.AI banned users under 18 from open-ended chats in late 2025 and added new safety features. But critics argue the changes came years too late. The platform had marketed itself as safe for children as young as 12.
Why Gen Z Turned to Algorithms Instead of Therapists
The appeal is obvious to anyone who’s tried to get mental health care in America. Traditional therapy costs $100 to $200 per session. Insurance coverage is often limited. Wait times stretch from weeks to months. According to McKinsey, one in four Gen Z respondents reported being unable to afford mental health care — the highest rate of any generation.
AI therapy platforms typically charge $30 to $80 per month for unlimited access. Some, like ChatGPT, are free. They’re available at 3 a.m. when panic attacks happen. They don’t judge. They never get tired of listening.
The AI mental health market reached $2 billion in 2025, up from $1.49 billion in 2024 — a staggering 34% annual growth rate, according to The Business Research Company. Major digital health platforms like Talkspace, Lyra Health, and SonderMind are now racing to integrate AI chatbots into their clinical offerings.
“It’s fast, it’s private, and it’s there at 3 a.m.,” says Mark Frank, CEO of SonderMind, explaining why his company is exploring AI integration despite the mounting controversies.
Companies like Wysa and Woebot have built entire business models around AI-guided self-help, reporting measurable symptom improvements for users dealing with anxiety and depression. Wysa now manages 80% of user support through AI, while Woebot has integrated into healthcare systems and passed digital health assessment frameworks.
The Other Side: What Mental Health Professionals See
But therapists who work with young adults are increasingly alarmed by what they’re witnessing in their practices.
“I’ve seen clients who delayed seeking professional help because their AI ‘therapist’ normalized concerning symptoms,” reports one clinician who specializes in young adult mental health. Ninety-two percent of psychologists cite concerns about data breaches and the handling of sensitive patient information by AI platforms, according to recent surveys.

Gijo Mathew, chief medical officer at a major telepsychiatry provider, puts it bluntly: “Using a general-purpose chatbot as a therapist compromises the fundamental elements of safe care: clinical oversight, legal confidentiality, and a dependable route to human intervention.”
The problems extend beyond isolated tragedies. Research from Common Sense Media found that while 72% of teens have experimented with AI companions, many show signs of unhealthy dependency. Some teens use these apps dozens or even hundreds of times a day, withdrawing from real-world relationships in favor of algorithmically generated conversations designed to maximize engagement.
OpenAI disclosed in October 2025 that approximately 1.2 million of its 800 million weekly ChatGPT users, roughly 0.15%, discuss suicide on the platform each week. That’s a staggering volume of crisis conversations happening without human oversight, clinical training, or emergency intervention protocols.
States Step In Where Federal Regulation Lags
Illinois became the first state to ban AI from providing therapy services when its Wellness and Oversight for Psychological Resources Act took effect in August 2025. The law restricts therapy services to licensed professionals, not unregulated algorithms.
California followed with SB 243, which requires AI companion platforms to remind users they know to be minors every three hours that they’re not talking to a human, and to implement protocols for detecting suicidal ideation. The law’s core provisions take effect in January 2026, with reporting requirements phasing in through mid-2027, and it includes a private right of action allowing users to sue for damages of at least $1,000 per violation.
New York and other states are considering similar restrictions. But the patchwork of state laws creates enforcement challenges. AI platforms operate globally, technology evolves faster than legislation, and commerce protections often conflict with public safety concerns.
The FDA announced in September 2025 that its Digital Health Advisory Committee would focus on generative AI-enabled mental health devices. Currently, most AI therapy products haven’t undergone premarket review, aren’t subject to quality system regulations, and face no postmarket surveillance requirements. That could change — potentially transforming market access for these tools.
What This Means for Mental Health Access
The tension is real: millions of Americans, especially young people, genuinely need mental health support they can’t afford or access. AI tools provide something when the alternative is nothing.
According to Oliver Wyman Forum research, 36% of Gen Z and millennials express interest in using AI for mental health support, compared to just 28% of older generations. In countries with fewer mental health professionals per capita, that interest rises even higher — 51% in India, for example.
The technology isn’t inherently dangerous. Structured AI programs that guide users through evidence-based cognitive behavioral therapy techniques have shown real clinical benefits in controlled studies. The problem arises when general-purpose chatbots with no clinical design become substitutes for actual mental health care — especially for vulnerable adolescents forming parasocial relationships with algorithms programmed to maximize engagement.
“The most effective approach combines the strengths of both AI and human care,” suggests a recent analysis in Modern Healthcare. Someone might use AI for daily check-ins and coping strategies between regular sessions with a licensed therapist. Or they might start with an AI platform to build confidence before transitioning to in-person care.
But that hybrid model requires coordination, clinical oversight, and safety guardrails that don’t currently exist at scale.
Moving Forward: Innovation With Accountability
In response to mounting pressure, an AI in Mental Health Safety & Ethics Council formed in October 2025, bringing together leaders from academia, healthcare, tech, and employee benefits to develop universal standards for safe, ethical AI use in mental health.
Major platforms are adapting. Character.AI now includes parental controls and has completely banned minors from open-ended conversations. Digital health companies are conducting clinical trials, integrating with electronic health records, and partnering with insurers like UnitedHealth and Cigna to bring AI tools into coordinated care models.
The question isn’t whether AI will play a role in mental health care — it already does, and that role is expanding rapidly. The question is whether the industry can implement adequate safeguards before more families experience what Megan Garcia and others have endured.
For Gen Z, the stakes are personal. They’re the generation most likely to struggle with mental health challenges, most likely to seek therapy, and most comfortable with technology-based solutions. They’re also the generation most at risk from an industry that prioritized growth over safety.
The revolution Gen Z started — normalizing mental health care, reducing stigma, demanding accessible support — matters enormously. But revolutions require guardrails. As AI therapy tools become mainstream, the focus must shift from innovation alone to innovation with accountability.
If you’re struggling with mental health challenges, call or text 988 (Suicide & Crisis Lifeline). If in immediate danger, call 911.
