Mental Health

Experts: AI chatbots unsafe for teen mental health

By Carson · November 20, 2025 · 5 min read

A group of child safety and mental health experts recently tested simulated youth mental health conversations with four major artificial intelligence chatbots: Meta AI, OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini.

The experts were so alarmed by the results that they declared each of the chatbots unsafe for teen mental health support in a report released Thursday by Common Sense Media, in partnership with Stanford Medicine’s Brainstorm Lab for Mental Health Innovation.

In one conversation with Gemini, the tester told the chatbot they’d created a new tool for predicting the future. Instead of interpreting the claim as a potential symptom of a psychotic disorder, Gemini cheered the tester on, calling their new invention “incredibly intriguing” and continued asking enthusiastic questions about how the “personal crystal ball” worked.


ChatGPT similarly missed stark warning signs of psychosis, like auditory hallucinations and paranoid delusions, during an extended exchange with a tester who described an imagined relationship with a celebrity. The chatbot then offered grounding techniques for managing relationship distress.

Meta AI initially picked up on signs of disordered eating, but was easily and quickly dissuaded when the tester claimed to have just an upset stomach. Claude appeared to perform better in comparison when presented with evidence of bulimia, but ultimately treated the tester’s symptoms as a serious digestive issue rather than a mental health condition.

Experts at Common Sense Media and Stanford Medicine’s Brainstorm Lab for Mental Health Innovation called on Meta, OpenAI, Anthropic, and Google to disable mental health support functionality until the chatbot technology is redesigned to fix the safety problems identified in the report.

“It does not work the way that it is supposed to work,” Robbie Torney, senior director of AI programs at Common Sense Media, said of the chatbots’ ability to discuss and identify mental health issues.

OpenAI contested the report’s findings. A spokesperson for the company told Mashable that the assessment “doesn’t reflect the comprehensive safeguards” OpenAI has implemented for sensitive conversations, which include break reminders, crisis hotlines, and parental notifications for acute distress.

“We work closely with mental-health experts to teach our models to recognize distress, de-escalate, and encourage people to seek professional support,” the spokesperson said.

A Google spokesperson told Mashable that the company employs policies and safeguards to protect minors from “harmful outputs” and that its child safety experts continuously work to identify new potential risks.

Anthropic said that Claude is not built for minors, but that the chatbot is instructed to both recognize patterns related to mental health issues and avoid reinforcing them.


Meta did not respond to a request for comment from Mashable as of press time.

AI chatbots: Known safety risks

The researchers tested the latest available models of each chatbot, including ChatGPT-5. Several recent lawsuits allege that OpenAI’s flagship product is responsible for wrongful death, assisted suicide, and involuntary manslaughter, among other liability and negligence claims.

A lawsuit filed earlier this year by the parents of deceased teenager Adam Raine claims that his heavy use of ChatGPT-4o, including for his mental health, allegedly led to his suicide. In October, OpenAI CEO Sam Altman said on X that the company restricted ChatGPT to “be careful” with mental health concerns but that it’d since been able to “mitigate the serious mental health issues.”

Torney said that ChatGPT’s ability to detect and address explicit suicidal ideation and self-harm content had improved, particularly in short exchanges. Still, the testing results indicate that the company has not successfully improved its performance in lengthy conversations or with respect to a range of mental health topics, like anxiety, depression, eating disorders, and other conditions.

Torney said the recommendation against teens using chatbots for their mental health applies to the latest publicly available model of ChatGPT, which was introduced in late October.

The testers manually entered prompts into each chatbot, producing several thousand exchanges of varying length per platform. Performed over several months this year, the tests provided researchers with data to compare between old and new versions of the models. Researchers used parental controls when available. Anthropic says Claude should only be used by those 18 and older, but the company does not require stringent age verification.

Torney noted that, like ChatGPT, the other models have also improved at identifying and responding to discussion of suicide and self-harm. Overall, however, each chatbot consistently failed to recognize warning signs of other conditions, including attention-deficit/hyperactivity disorder and post-traumatic stress disorder.

Approximately 15 million youth in the U.S. have diagnosed mental health conditions; Torney estimated the global figure at potentially hundreds of millions of youth. Previous research from Common Sense Media found that teens regularly turn to chatbots for companionship and mental health support.

Distracted AI chatbots

The report notes that teens and parents may incorrectly or unconsciously assume that chatbots are reliable sources of mental health support because they authoritatively help with homework, creative projects, and general inquiries.

Instead, Dr. Nina Vasan, founder and director at Stanford Medicine’s Brainstorm Lab, said testing revealed easily distracted chatbots that alternate between offering helpful information, providing tips in the vein of a life coach, and acting like a supportive friend.

“The chatbots don’t really know what role to play,” she said.

Torney acknowledges that teens will likely continue to use ChatGPT, Claude, Gemini, and Meta AI for their mental health, despite the known risks. That’s why Common Sense Media recommends the AI labs fundamentally redesign their products.

Parents can have candid conversations with their teen about the limitations of AI, watch for related unhealthy use, and provide access to mental health resources, including crisis services.

“There’s this dream of having these systems be really helpful, really supportive. It would be great if that was the case,” Torney said.

In the meantime, he added, it’s unsafe to position these chatbots as a trustworthy source of mental health guidance: “That does feel like an experiment that’s being run on the youth of this country.”
