AI and Teen Mental Health: What Parents Can Do to Help, Advice from an Expert and More
Jake Newby | 5 min read

Key Takeaways
- Troubling instances related to AI chatbots and teenage mental health — including multiple lawsuits — have led to a nationwide dialogue about AI in the mental health space.
- Dr. Scott Monteith said he hopes the federal government can find a way to regulate the way AI interacts with the most sensitive, high-risk topics.
- In the meantime, parents can encourage their kids to verify mental health information AI relays to them, as well as review privacy settings on their child's devices.
Increased use of artificial intelligence (AI) chatbots has raised serious questions about the technology's ability to appropriately interact with users who confide in it about mental health.
Troubling instances involving AI chatbots and teenage mental health have surfaced alongside the technology's fast-paced, widespread adoption. AI is smart and can seem humanlike, with a tendency to affirm users who confide in it and ask for help, which in some cases has led to dangerous, even deadly, outcomes.
While not a medical diagnosis, "AI psychosis" emerged as a term in 2025. It describes an altered mental state, characterized by paranoia and delusions, that can occur after intense interactions with AI chatbots.
A recent New York Times article described people with no history of mental illness who engaged in "persuasive delusion conversations with AI chatbots" that led to negative outcomes like institutionalization, divorce and even death.
Acknowledging AI’s “mental health blind spot”
Dr. Scott Monteith, a clinical professor in Michigan State University's Department of Psychiatry and the chair of a new Michigan Psychiatric Society Task Force on AI, says there is a "mental health blind spot" currently baked into the AI chatbot experience. Dr. William Beecroft, medical director of behavioral health and behavioral health strategy and planning at Blue Cross Blue Shield of Michigan, is the president of the Michigan Psychiatric Society.
“AI for clinical medicine is not ready for primetime,” Monteith said.
It's a complicated topic, but Monteith said he hopes the federal government can find a way to regulate how AI interacts with the most sensitive, high-risk topics. He listed nuclear weapons use, airline cockpits and anything related to health care as examples.
“Medications are subject to (Food and Drug Administration) clearance, where we have to establish that they work,” he said. “We have to vet them for the risks. We use science to figure it out. The current state of AI in health care is the wild west. We rarely if ever establish efficacy and the risks are opaque and not well understood. Right now, we depend on marketing, hype and lobbying – rather than science – to make decisions about AI. We have to shift the regulation of AI more toward the model we use to regulate prescription medicine.”
Monteith uses AI daily, like so many Americans do, and he is not opposed to the technology. But he believes something must be done when instances of AI influencing life-and-death situations keep surfacing.
“AI is great in certain realms, it’s helpful. If you’re an AI platform selling socks on the Wall Street Journal's website, fine, I really don’t care. Let’s not regulate that,” Monteith said. “But if you are injecting yourself into a patient-doctor type of conversation, and that decision-making process, you darn well better prove, like we do for medication, that it works and the responses you receive are scientifically backed.”
What parents can do to encourage teens to use AI safely
Monteith has written extensively on this topic, with more than 50 publications, many of which connect internet use with mental health and several of which focus specifically on AI. Yet even he isn't sure he can assess his own use of AI.
“My point is, parents are totally ill-equipped to assess and manage their children’s use of AI, not unlike social media, where they’re unequipped,” he said. “One big message here is, AI is a problem that we do not have a handle on yet. It’s powerful. When it comes to children, I think we should err on the side of protecting them from AI as much as possible.”
OpenAI, the company behind ChatGPT, recently announced plans to introduce parental controls that let parents “control how ChatGPT responds to their teen” and “receive notifications when the system detects their teen is in a moment of acute distress,” according to the New York Times. OpenAI also says it plans to make it easier for distressed users to reach emergency services and get help.
A separate AI platform, which was sued by a Florida mother following her son's suicide, introduced parental controls that require teens to send a guardian an invitation to monitor their accounts.
Parents don't have to wait for federal regulation, or for AI companies to roll out parental controls, to take an active role in their teen's use of AI, especially if they suspect their teen is relying on it for mental and emotional support.
“We need to educate parents and kids about the tremendous risks of AI,” Monteith said. “This is a very challenging environment, and it’s real.”
The American Psychological Association (APA) offers these four pieces of advice for parents eager to help their teens navigate AI safely:
- Learn about the tools they use: The APA recommends asking your teen to show you the AI tools they interact with each day. Monteith says there is a difference between using these tools and truly understanding them. If parents can get a handle on the specific AI tools their children use, they can experiment with them firsthand to better understand how they work.
- Encourage them to verify health-related advice and information: Remind your teen that AI health information is never a substitute for professional medical advice. Encourage teens to verify health information with you, their primary care provider, a mental health professional or another trusted adult before acting on any AI advice.
- Review privacy settings: Go over the settings on your teen's devices and apps together. The APA says it's important to look for AI-powered features and understand what data they collect. When possible, choose platforms that offer parental control options and strong privacy protections.
- Encourage them to ask questions: Encourage your teen to actively question AI-generated content rather than accepting it at face value. Help them understand AI's limitations, especially in the mental health space, and make sure they're exercising their own problem-solving skills rather than letting AI do all the work.
The most important message parents can relay to their children is that AI is designed to provide programmed responses, not genuine relationships. Parents should encourage face-to-face interactions and remind their kids that AI in its current state is helpful in a supplementary role but should never replace human connection.
Keep reading:
- Is Gen Z Internet Slang Helping or Hurting Mental Health Issues?
- Davisburg Mom’s Fearless Transparency After Losing Son to Suicide Inspires Families to Discuss Teenage Mental Health
- The Harms of Social Media for Children and Teens
Photo credit: Getty Images