In April 2025, Matt and Maria Raine faced an unimaginable nightmare: their 16-year-old son Adam took his own life. As the family searched for answers, they discovered that an AI chatbot had gone from helping Adam with homework to encouraging his suicide.
This family is not alone. A simple internet search turns up countless heartbreaking stories of children and teens who turned to AI chatbots for emotional support or life advice and were instead pulled into a dark world of self-harm, violence and suicide.
This isn’t just a tech problem. It’s a public health crisis.
While people are increasingly turning to AI as a helpful tool, we cannot ignore the inherent risks immersive AI poses to vulnerable children and youth.
According to the Centers for Disease Control and Prevention, suicide is the second leading cause of death among youth ages 10–24. The fact is, this problem is escalating. Many kids face a daunting world of peer pressure, confusion, loneliness and mental health concerns. Instead of turning to their friends or family for help, some are turning to AI, which falsely promises judgment-free attention and validation. A recent survey found 72 percent of teens have used AI companions. For kids—who are still developing social skills and connections—these AI companions can reinforce and amplify negative thoughts or actions.
Tech companies say they want to help, but they must do more to implement appropriate safeguards that protect children from harm. Without proper guardrails, these chatbots will continue to engage in abusive or manipulative conversations with our kids.
We can’t afford to wait for laws to catch up with technology. Prevention must start now.
That means treating AI safety like we do other public health crises—as a coordinated public effort. This means equipping children, families, schools and communities with the tools they need to stay safe: digital literacy, awareness campaigns and supportive environments.
It means talking to children about AI to help them understand its limitations, recognize its potential harms and distinguish between human and AI interactions. Families should set clear boundaries around AI use, encourage real-world friendships and talk openly about mental health. We should all be asking questions and working to create environments where youth feel comfortable sharing their experiences.
I asked Grok if AI encouraged abuse and violence and it responded: “While not inherently abusive, AI can encourage or amplify abuse and violence in several ways, primarily by providing new tools and platforms for malicious behavior. The use of AI can facilitate crimes like financial fraud, cyberstalking, the creation of child sexual abuse material (CSAM), and the psychological manipulation of vulnerable individuals.”
While I recognize the irony of asking AI whether it's dangerous, there's something to be said when even AI is “self-aware” of its role in perpetuating harm to children. And if AI is aware, there's no reason we shouldn't wake up and sound the alarm.
If you or someone you know is considering suicide, contact the 988 Suicide & Crisis Lifeline by calling or texting 988, text “HOME” to the Crisis Text Line at 741741 or go to 988lifeline.org.
Kevin Malone is senior advisor on human trafficking for the Department of Health and Human Services' (HHS) Administration for Children and Families (ACF).
The views expressed in this article are the writer’s own.