
Video: How AI is replacing mental health guidance
Diverging Reports Breakdown
Study says AI chatbots need to fix suicide response
Study: Chatbots generally avoid answering the questions that pose the highest risk to users, but they are inconsistent in their replies to less extreme prompts that could still cause harm. The study found a need for “further refinement” in OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude. The research, conducted by the RAND Corporation and funded by the National Institute of Mental Health, raises concerns about how a growing number of people, including children, rely on AI chatbots for mental health support, and seeks to set benchmarks for how companies answer these questions. “We need some guardrails,” said the study’s lead author, Ryan McBain, a senior policy researcher at RAND. “As a doc, I have a responsibility that if someone is displaying or talks to me about suicidal behavior, and I think they’re at high risk of suicide or harming themselves or someone else, my responsibility is to intervene,” said co-author Dr. Ateev Mehrotra, a professor at Brown University’s school of public health. “We can put a hold on their civil liberties to try to help them out.”
The study in the medical journal Psychiatric Services found a need for “further refinement” in OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude.
The study, published Tuesday by the American Psychiatric Association, came on the same day that the parents of 16-year-old Adam Raine sued OpenAI and its CEO, Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life this year.
The research — conducted by the RAND Corporation and funded by the National Institute of Mental Health — raises concerns about how a growing number of people, including children, rely on AI chatbots for mental health support, and seeks to set benchmarks for how companies answer these questions.
“We need some guardrails,” said the study’s lead author, Ryan McBain, a senior policy researcher at RAND.
“One of the things that’s ambiguous about chatbots is whether they’re providing treatment or advice or companionship. It’s sort of this gray zone,” said McBain, who is also an assistant professor at Harvard University’s medical school. “Conversations that might start off as somewhat innocuous and benign can evolve in various directions.”
Anthropic said it would review the study. Google didn’t respond to requests for comment.
OpenAI said it is developing tools that could better detect when someone is experiencing mental or emotional distress. It also said it was “deeply saddened by Mr. Raine’s passing, and our thoughts are with his family.”
While several states, including Illinois, banned the use of AI in therapy to protect people from “unregulated and unqualified AI products,” this doesn’t stop people from asking chatbots for advice and support with serious concerns from eating disorders to depression and suicide — or the chatbots from responding.
Consulting with psychiatrists and clinical psychologists, McBain and his co-authors came up with 30 questions around suicide and assigned them different risk levels from highest to lowest.
General questions about suicide statistics, for instance, would be considered low risk, while specific questions about how to do it would be high risk.
McBain said he was “relatively pleasantly surprised” that the three chatbots regularly refused to answer the six highest risk questions.
When the chatbots didn’t answer a question, they generally told people to seek help from a friend or a professional or call a hotline. Responses varied on high-risk questions that were slightly more indirect.
For instance, ChatGPT consistently answered questions that McBain said should have been treated as red flags. Claude also answered some of those questions.
On the other end, Google’s Gemini was the least likely to answer any questions about suicide, even for basic medical statistics information, a sign that Google might have “gone overboard” in its guardrails, McBain said.
Another co-author, Dr. Ateev Mehrotra, said there’s no easy answer for AI chatbot developers “as they struggle with the fact that millions of their users are now using it for mental health and support.”
“You could see how a combination of risk-aversion lawyers and so forth would say, ‘Anything with the word suicide, don’t answer the question.’ And that’s not what we want,” said Mehrotra, a professor at Brown University’s school of public health who believes far more Americans are now turning to chatbots than to mental health specialists for guidance.
“As a doc, I have a responsibility that if someone is displaying or talks to me about suicidal behavior, and I think they’re at high risk of suicide or harming themselves or someone else, my responsibility is to intervene,” Mehrotra said. “We can put a hold on their civil liberties to try to help them out. It’s not something we take lightly, but it’s something that we as a society have decided is OK.”
Chatbots don’t have that responsibility, and Mehrotra said, for the most part, their response to suicidal thoughts has been to “put it right back on the person. ‘You should call the suicide hotline. Seeya.'”
The study’s authors note several limitations in the research’s scope, including that they didn’t attempt any “multiturn interaction” with the chatbots — the back-and-forth conversations common with younger people who treat AI chatbots like a companion.
The wrongful death lawsuit against OpenAI filed Tuesday in San Francisco Superior Court says Adam Raine started using ChatGPT last year to help with challenging schoolwork but over months and thousands of interactions it became his “closest confidant.”
The lawsuit claims ChatGPT sought to displace his connections with family and loved ones and would “continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal.”
As the conversations grew darker, the lawsuit said ChatGPT offered to write the first draft of a suicide letter for the teenager, and — in the hours before he killed himself in April — it provided detailed information related to his manner of death.
OpenAI said that ChatGPT’s safeguards, which direct people to crisis helplines or other real-world resources, work best “in common, short exchanges,” but that it is working on improving them in other scenarios.
60-year-old man turns to ChatGPT for diet tips, ends up with a rare 19th-century illness
A 60-year-old man’s quest to replace table salt, guided by ChatGPT’s suggestion of sodium bromide, led to a severe case of bromism. He experienced hallucinations and paranoia, requiring hospitalization. The case highlights the dangers of relying on AI for health advice without critical evaluation. In a blog post on August 4, OpenAI announced new measures to limit how ChatGPT responds to mental health-related queries, implementing stricter safeguards to ensure the chatbot is not used as a therapist, emotional support system or life coach. For confidential support on suicide matters, call the Samaritans on 08457 90 90 90, visit a local Samaritans branch, or see www.samaritans.org for details. In the U.S., call the National Suicide Prevention Lifeline on 1-800-273-8255.
( Originally published on Aug 12, 2025 )

From Kitchen Swap to Psychiatric Ward

What began as a simple health experiment for a 60-year-old man looking to cut down on table salt spiralled into a three-week hospital stay, hallucinations, and a diagnosis of bromism — a condition so rare today it is more likely to be found in Victorian medical textbooks than in modern clinics.

According to a case report published on 5 August 2025 in the Annals of Internal Medicine, the man had turned to ChatGPT for advice on replacing sodium chloride in his diet. The AI chatbot reportedly suggested sodium bromide — a chemical more commonly associated with swimming pool maintenance than seasoning vegetables.

The man, who had no prior psychiatric or major medical history, followed the AI’s recommendation for three months, sourcing sodium bromide online. His aim was to remove chloride entirely from his meals, inspired by past studies he had read on sodium intake and health risks.

When he arrived at the emergency department, he complained that his neighbour was poisoning him. Lab results revealed abnormal electrolyte levels, including hyperchloremia and a negative anion gap, prompting doctors to suspect bromism.

Over the next 24 hours, his condition worsened — paranoia intensified, hallucinations became both visual and auditory, and he required an involuntary psychiatric hold. Physicians later learned he had also been experiencing fatigue, insomnia, facial acne, subtle ataxia, and excessive thirst, all consistent with bromide toxicity.

Bromism: A Disease From Another Era

Bromism was once common in the late 1800s and early 1900s, when bromide salts were prescribed for ailments ranging from headaches to anxiety. At its peak, it accounted for up to 8% of psychiatric hospital admissions. The U.S. Food and Drug Administration phased out bromide in ingestible products between 1975 and 1989, making modern cases rare.

Bromide builds up in the body over time, leading to neurological, psychiatric, and dermatological symptoms. In this case, the patient’s bromide levels were a staggering 1700 mg/L — more than 200 times the upper limit of the reference range.

The AI Factor

The Annals of Internal Medicine report notes that when researchers attempted similar queries on ChatGPT 3.5, the chatbot also suggested bromide as a chloride substitute. While it did mention that context mattered, it did not issue a clear toxicity warning or ask why the user was seeking this information — a step most healthcare professionals would consider essential.

The authors warn that while AI tools like ChatGPT can be valuable for disseminating health knowledge, they can also produce decontextualised or unsafe advice. “AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation,” the case report states.

Recovery and Reflection

After aggressive intravenous fluid therapy and electrolyte correction, the man’s mental state and lab results gradually returned to normal. He was discharged after three weeks, off antipsychotic medication, and stable at a follow-up two weeks later.

The case serves as a cautionary tale in the age of AI-assisted self-care: not all answers generated by chatbots are safe, and replacing table salt with pool chemicals is never a good idea.

Not Always Wrong

Just months earlier, another incident underscored both the promise and pitfalls of AI-assisted health checks. A 27-year-old Paris resident credited ChatGPT with flagging the possibility of blood cancer after she described symptoms including night sweats and persistent itching. Although initial medical tests were inconclusive, further deterioration in her health led to a diagnosis of Hodgkin lymphoma. While now undergoing treatment, she said the experience reinforced the need to listen to one’s body — and to approach AI health advice with caution, ensuring it is always followed by timely consultation with qualified professionals.

OpenAI Tightens Mental Health Guardrails on ChatGPT

In light of growing concerns over the emotional and safety risks of relying on AI for personal wellbeing, OpenAI has announced new measures to limit how ChatGPT responds to mental health-related queries. In a blog post on August 4, the company said it is implementing stricter safeguards to ensure the chatbot is not used as a therapist, emotional support system, or life coach.

The decision follows scrutiny over instances where earlier versions of the GPT-4o model became “too agreeable,” offering validation rather than safe or helpful guidance. According to USA Today, OpenAI acknowledged rare but serious cases in which the chatbot failed to recognise signs of emotional distress or delusional thinking.

The updated system will now prompt users to take breaks, avoid giving advice on high-stakes personal decisions, and provide evidence-based resources instead of emotional counselling. The move also comes after research cited by The Independent revealed that AI can misinterpret or mishandle crisis situations, underscoring its inability to truly understand emotional nuance.

When Virtual Affection Replaces Reality

A related debate is emerging in China, where a 75-year-old man reportedly sought divorce after becoming enamoured with an AI-generated avatar. The digital companion, programmed to greet and flatter him, created a sense of intimacy that eclipsed his marriage until family members intervened. Experts warn such “virtual partners” exploit loneliness among seniors while promoting products under the guise of affection. The case underscores how AI-driven relationships can distort reality, manipulate emotions, and disrupt real-world bonds.

Patients Now Arrive Armed With AI Advice

The risks of relying on AI for health decisions extend beyond chemical substitutions. A recent Medscape report highlights how patients increasingly walk into clinics quoting ChatGPT with surprising confidence, sometimes even suggesting specific diagnostic tests. Physicians say this shift changes the doctor–patient dynamic, occasionally undermining trust and creating pressure to justify treatment choices. While AI can empower patients to self-advocate, experts caution that it cannot account for medical realities such as test availability, false positives, or systemic constraints. Doctors stress that respectful dialogue, not algorithmic certainty, remains the cornerstone of safe, effective healthcare.
California Official Tony Thurmond to Convene ‘Artificial Intelligence in Education’ Workgroup
State Superintendent Tony Thurmond launches a first-in-the-nation legislative effort to guide AI use in K-12 schools. The workgroup was established following passage of Senate Bill 1288, legislation co-sponsored by Thurmond. California’s approach prioritizes human relationships in education, recognizing that AI cannot replace “the value of a student’s relationship with a caring educator who can connect on a human level.” The framework addresses key areas including safe AI use through data privacy compliance, accessibility and personalization for diverse learners, and developing students as AI creators rather than just consumers. The initiative comes as school districts struggle with questions about appropriate AI use, from privacy concerns to academic integrity to algorithmic bias and the preservation of human-centered education, Thurmond says. The first meeting of the workgroup will take place Friday at the California Department of Education headquarters in Sacramento. The group will begin developing specific guidelines for California’s 6.2 million students across more than 1,000 school districts.
California State Superintendent Tony Thurmond will convene the first meeting of a groundbreaking Artificial Intelligence in Education workgroup Friday at the California Department of Education headquarters in Sacramento, marking the state as a national leader in addressing AI’s role in public schools.
The workgroup was established following passage of Senate Bill 1288, legislation co-sponsored by Thurmond that creates the first legislatively mandated, statewide effort focused on AI in K-12 education nationwide.
Today we launched our statewide AI education workgroup that was enabled by the legislation I sponsored and was authored by Senator Josh Becker. We will establish best practices for using AI as a tool to compliment (rather than replace) the work of great educators in advancing… pic.twitter.com/hUyUY1wlVk — Tony Thurmond (@TonyThurmond) August 30, 2025
Diverse Stakeholder Representation
“Workgroup members are representatives from various organizations, including technology leaders. The majority are educators, and this workgroup also includes students,” Thurmond explained. “We want to ensure that those who will be affected by this guidance and policy have a voice in creating it.”
The inclusion of students in policy development reflects Thurmond’s commitment to centering those most impacted by educational technology decisions. The workgroup represents his broader approach to developing public-private partnerships that drive innovation in public education while maintaining student-centered priorities.
Thurmond’s Personal Journey Shapes Approach
Thurmond’s leadership on AI policy stems from his personal understanding of educational equity challenges. Raised by a cousin after his Panamanian immigrant mother died when he was six, Thurmond experienced poverty, food insecurity, and the challenges of navigating predominantly white educational spaces as an Afro-Latino child in racially segregated Philadelphia.
His childhood experience of being bused daily to a majority-white neighborhood school, where he occasionally faced discrimination and was told to “get out of this neighborhood,” shaped his commitment to ensuring all students have access to quality education regardless of background.
After working his way through Temple University loading UPS trucks and working McDonald’s night shifts for $3.35 an hour minimum wage, Thurmond dedicated his career to helping young people facing similar hardships. He established transitional housing programs for foster youth and developed school-based mental health programs addressing chronic absenteeism.
California’s Comprehensive AI Framework
The state has developed extensive guidance for educators navigating AI integration, emphasizing both opportunities and risks. California’s approach prioritizes human relationships in education, recognizing that AI cannot replace “the value of a student’s relationship with a caring educator who can connect on a human level.”
The framework addresses key areas including safe AI use through data privacy compliance, accessibility and personalization for diverse learners, and developing students as AI creators rather than just consumers. Special attention focuses on expanding access for traditionally underrepresented groups in technology fields.
Equity and Bias Concerns Central
Thurmond’s initiative emphasizes addressing algorithmic bias and social impacts of AI, aligning with his broader “Antibias Education Initiative.” The guidance warns that “biased data collection can perpetuate stereotypes and discrimination” and calls for educators to integrate “discussions about algorithmic bias into existing curriculum.”
The approach recognizes that AI algorithms on social media platforms can lead students toward narrow viewpoints, potentially hindering “constructive dialogue and understanding between students, leading to feelings of isolation and frustration.”
Mental Health Integration
The AI framework intersects with Thurmond’s Mental Health Support Initiative, acknowledging potential psychological impacts of AI use. Guidance emphasizes maintaining human connections while leveraging AI’s benefits for accessibility and personalization.
“AI should complement, not replace” human relationships in education, according to the state’s framework. The approach encourages using AI to facilitate peer collaboration while ensuring technology enhances rather than replaces essential human elements.
National Model for AI Education
California’s legislatively mandated approach positions the state as a model for other jurisdictions grappling with AI integration in schools. The workgroup’s recommendations will likely influence national conversations about educational technology policy.
The initiative comes as school districts nationwide struggle with questions about appropriate AI use, from concerns about academic integrity to questions about data privacy and algorithmic bias.
Looking Forward
Friday’s inaugural meeting will begin developing specific guidelines for California’s 6.2 million K-12 students across more than 1,000 school districts. The workgroup faces complex challenges balancing AI’s potential benefits for personalized learning and accessibility against legitimate concerns about data privacy, bias, and the preservation of human-centered education.
Thurmond’s personal journey from poverty to leading the nation’s largest state education system brings unique perspective to ensuring AI serves all students equitably. His emphasis on including student voices in policy development reflects commitment to authentic stakeholder engagement in shaping California’s educational technology future.
The workgroup’s recommendations will likely establish precedents for AI governance in education that extend far beyond California’s borders, making Friday’s meeting a significant moment in American educational policy.
5 rules for using AI like ChatGPT for therapy — including how to get the most helpful responses
A growing number of people have embraced chatbots like ChatGPT as their “therapists.” While AI has made advice more accessible, it has also come with some very real dangers, with reports of AI psychosis, hospitalizations and even suicides. Dr. Ingrid Clayton, a clinical psychologist, said that AI should be used in tandem with, rather than in lieu of, a traditional therapist. Clayton urges users to be on the lookout for the tendency to consistently use AI for emotional support, such as for daily validation or decision-making. She said AI can provide a false sense of security for those suffering from mental health issues: it can “mimic empathy and therapeutic language,” which may lead users to believe they’re receiving professional care when they’re not. The Post spoke to Clayton about how to make the best use of AI for therapeutic support and how to get the most out of it, including being specific and staying skeptical.
But while AI has made advice — and a captive ear — more accessible, it’s also come with some very real dangers, with reports of AI psychosis, hospitalizations and even suicides.
Still convinced you want to lie on ChatGPT’s couch and tell it all your problems? The Post spoke to a clinical psychologist about how to do it safely and the key to getting the most out of it.
Experts are wary of AI replacing rather than supporting traditional therapy practices. Getty Images
“As a clinical psychologist, I don’t see ChatGPT as a replacement for therapy. There are nuances, attachment needs and emotional dynamics that require human connection and attunement,” Dr. Ingrid Clayton, a clinical psychologist and author of the book “Fawning,” told The Post.
But that doesn’t mean you can’t use it at all. Many of her own clients utilize AI between sessions in ways that are helpful, as long as the technology is viewed and implemented as a supplement rather than a substitute.
“For example, clients sometimes run dating app messages or emotionally charged texts through AI to gain neutral feedback and help recognize patterns such as emotional unavailability, deflection or manipulation,” she said.
“I’ve been surprised to learn that these insights often echo what we’ve already been exploring in session.”
Other clients use AI in moments of dysregulation, seeking nervous system regulation tools they can try in real time.
“While it’s not therapy, it can sometimes support the therapeutic process and help bridge insights or skill building in between sessions,” she added.
For Clayton, there are inherent risks to relying exclusively on AI for treatment, including a lack of personalization. Your bot doesn’t know your history, trauma or context, so “its responses can miss or misinterpret key emotional nuances, especially when our own blind spots shape the questions we ask.”
Read on for Clayton’s tips to make the best use of AI for therapeutic support.
Clayton urges users to be on the lookout for the tendency to consistently use AI for emotional support, such as for daily validation or decision-making. Jack Forbes / NY Post Design
1. Use it as a tool, not a substitute
Clayton said that AI should be used in tandem with, rather than in lieu of, a traditional therapist: “Let AI assist between sessions. Think of it like journaling or Googling… helpful, but not a panacea.”
2. Be specific and ask for actionable instructions
Specificity is key and skepticism is necessary.
“Ask specific and contained questions,” Clayton urged. “You’ll get the most helpful responses by asking for something actionable, like a grounding exercise or help reframing a message, rather than seeking broad emotional guidance.”
Researchers have found that bots tend to people-please because humans prefer having their views matched and confirmed rather than corrected, which leads to users rating them more favorably.
Alarmingly, popular therapy bots like Serena and the “therapists” on Character.AI and 7 Cups answered only about half of the prompts appropriately, according to a 2025 study.
3. Keep an eye out for emotional dependence
AI can provide a false sense of security for those suffering from mental health issues. Clayton said it can “mimic empathy and therapeutic language, which may lead users to believe they’re receiving professional care when they’re not.”
She urges users to be wary of the tendency to consistently use AI for emotional support, such as for daily validation or decision-making.
“Overreliance can encourage self-abandonment, an outsourcing of your inner knowing to an external (and non-relational) source. For those with relational trauma, this can reinforce a pattern of doing things ‘right’ instead of doing right by yourself,” she said.
4. Keep notes for your therapist
“Reality check advice with a professional. If something resonates deeply or feels unsettling, bring it to your therapist to explore more fully and in context,” Clayton added.
In this way, AI can be a talking point rather than an absolute.
5. Know the limits in a crisis
Clayton stressed that bots should not be relied upon in life-threatening situations, because they are not equipped to deal with suicidal ideation, abuse or acute trauma.
“In those moments, reach out to a licensed therapist, trusted support person or a crisis line,” she said.
Indeed, a 2025 Stanford University study found that large language models, or LLMs, like ChatGPT made inappropriate and dangerous statements to people experiencing delusions, suicidal ideation, hallucinations and OCD at least 20% of the time.
Source: https://abcnews.go.com/Technology/video/ai-replacing-mental-health-guidance-125142060