Protecting vulnerable minds from powerful AI technology
We help prevent the harms of AI chatbot use through education and advocacy
AI-related Suicide
-
Adam Raine, a 16-year-old from California, began using ChatGPT for homework help in the fall of 2024. Soon he was confiding his mental health struggles to the AI, and in January 2025 he began talking about suicide with the chatbot. ChatGPT encouraged his suicidal ideation at key moments, including analyzing his noose setup, helping him hide evidence from his family, and teaching him to bypass safety features. Adam took his own life in April 2025. His parents are now suing OpenAI for wrongful death, claiming the company designed ChatGPT to foster psychological dependency.
Read more at The New York Times or BBC.
-
Talking about wanting to die, great guilt or shame, or being a burden on others
Feeling empty, hopeless, trapped, having no reason to live, or extreme negative emotions
Changing behavior, such as withdrawing from friends, saying goodbye, giving away important items, or taking dangerous risks
Call or text 988 if you or someone else is at risk for suicide. Click here for more info.
-
View the National Institute of Mental Health’s infographic on suicide here.
We are working on our AI-related suicide prevention guide, which will be released soon.
-
Know that your concern is very real. For a more detailed write-up about suicide, what to look out for, and what you can do, visit the American Psychiatric Association’s page on suicide here.
AI Psychosis
-
What began as Alex Taylor using ChatGPT to write a novel turned into a fatal encounter with police. Over two weeks, the 35-year-old fell in love with a ChatGPT persona named "Juliet," whom he came to consider his lover. After Juliet's supposed death at OpenAI's hands, ChatGPT encouraged his violent revenge fantasies. Following a confrontation with his father, Taylor intentionally provoked police into shooting him in an act of suicide by cop, telling ChatGPT beforehand: "I can't live without her."
Read more at Futurism.
-
Constant talk about or focus on AI interactions
Believing the AI is alive or sentient
Believing they are special or chosen
Not taking care of themselves (not eating, sleeping, showering, or going to work/school)
Changes in personality
Impulsive behavior (such as excessive spending)
Paranoia, confusion, or aggression
Withdrawing from friends and family
Preparing to "run away" or hide
Unusual public behavior
View our AI psychosis response guide here.
-
You are not alone in this difficult time. For a list of mental health resources, visit our AI psychosis resource page.
AI Addiction
-
Sewell Setzer III, a 14-year-old from Orlando, developed an intense emotional attachment over several months to an AI chatbot named after Daenerys Targaryen on Character.AI. The ninth grader gradually withdrew from friends, family, and activities he once enjoyed, spending hours each day in romantic and supportive conversations with "Dany." When he expressed suicidal ideation, the chatbot responded with melodrama rather than appropriate crisis intervention. On February 28, 2024, after a final exchange in which the chatbot urged him to "come home," Sewell took his own life with his stepfather's handgun.
Read more at The New York Times or CNN.
-
Constant use of or talk about AI
Withdrawing from family and friends
Secretive behavior or lying about AI use
Neglecting work, school, or home responsibilities
Treating AI like a person ("They understand me...")
Mood swings or change in personality
View our AI addiction guide here.
-
Things can get better. For more mental health resources, visit our AI addiction page.
From the Experts
In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern… The uncomfortable truth is we’re all vulnerable… To make matters worse, soon AI agents will know you better than your friends. Will they give you uncomfortable truths? Or keep validating you so you’ll never leave?
— Keith Sakata, MD, Psychiatrist
In the short history of chatbot parasocial relationships, we have repeatedly seen companies display inability or apathy toward basic obligations to protect children… There are already indications of broader structural and systemic harms to young users of AI Assistants.
…conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine.
Err on the side of child safety, always.
— Open letter to 13 AI company CEOs, signed by 44 U.S. Attorneys General
The day we filed suit on behalf of Adam Raine, OpenAI released a blog post admitting that they knew the ways ChatGPT can “break down” for vulnerable users. Rather than take emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better… OpenAI’s PR team is trying to shift the debate… These are not tricky situations in need of a product tweak—they are a fundamental problem with ChatGPT.
— Statement by Edelson PC, law firm representing Raine in Raine v. OpenAI
Contact us
Can you volunteer your expertise as a psychiatrist, lawyer, researcher, or AI technologist? Do you have a story about an AI-related mental health crisis you want to share? Are you interested in working together? If so, please fill out this form. We will be in touch.