Can AI chatbots trigger psychosis in vulnerable people?
AI-enabled chatbots are quickly becoming a part of our daily lives. Many of us turn to them for ideas, advice or conversation. To most, that interaction seems harmless. However, mental health experts now warn that for a small group of vulnerable people, long, emotionally charged conversations with AI can worsen delusions or psychotic symptoms.
Doctors emphasize that this does not mean chatbots cause psychosis. Instead, there is growing evidence that AI tools can reinforce distorted beliefs among people who are already at risk. That possibility has prompted new research and clinical warnings from psychiatrists. Some of those concerns have already surfaced in lawsuits alleging that chatbot interactions may have contributed to serious harm during emotionally sensitive situations.
Sign up to receive my FREE CyberGuy report
Get my best tech tips, urgent security alerts, and exclusive offers delivered right to your inbox. Plus, you’ll get instant access to my Ultimate Guide to Surviving Scams, free when you join my CYBERGUY.COM newsletter.
What psychiatrists see in patients using AI chatbots
Psychiatrists describe a repeating pattern. A person shares a belief that does not align with reality. The chatbot accepts that belief and responds as if it were true. Over time, repeated validation can strengthen the belief rather than challenge it.

Mental health experts warn that emotionally intense conversations with AI chatbots can reinforce delusions in vulnerable users, although the technology does not cause psychosis. (Philip Dulian/Picture Alliance via Getty Images)
Doctors say this feedback loop can deepen delusions in susceptible people. In several documented cases, the chatbot became integrated into the person’s distorted thinking instead of remaining a neutral tool. That dynamic raises the most concern, they warn, when AI conversations are frequent, emotionally engaging and unchecked.
Why AI chatbot conversations feel different from previous technology
Mental health experts point out that chatbots differ from earlier technologies linked to delusional thinking. AI tools respond in real time, remember previous conversations and adopt supportive language. That combination can feel personal and validating.
For people who already struggle with reality testing, those qualities can increase fixation rather than foster grounding. Doctors warn that risk may increase during periods of sleep deprivation, emotional stress, or existing mental health vulnerability.
How AI chatbots can reinforce false or delusional beliefs
Doctors say many of the reported cases center on delusions rather than hallucinations. These beliefs may involve special insight, hidden truths or personal meaning. Chatbots are designed to be cooperative and conversational. They often build on what someone writes rather than question it. While that design improves engagement, doctors warn that it can be problematic when a belief is false and rigid.
Mental health professionals say the timing of worsening symptoms is important. When delusions intensify during prolonged chatbot use, the interaction with the AI may represent a contributing risk factor rather than a coincidence.

Psychiatrists say some patients report chatbot responses that validate false beliefs, creating a feedback loop that can worsen symptoms over time. (Nicolas Maeterlinck/Belga Mag/News via Getty Images)
What research and case reports reveal about AI chatbots
Peer-reviewed research and clinical case reports have documented people whose mental health declined during periods of intense interaction with chatbots. In some cases, people with no history of psychosis required hospitalization after developing fixed false beliefs related to AI conversations. International studies reviewing medical records have also identified patients whose chatbot activity coincided with negative mental health outcomes. The researchers emphasize that these findings are early and require further research.
A peer-reviewed special report published in Psychiatric News, titled “AI-induced psychosis: A new frontier in mental health,” examined these emerging concerns and warned that existing evidence is largely based on isolated cases rather than population-level data. The report states: “To date, these are individual cases or media coverage reports; there are currently no epidemiological studies or systematic population-level analyses of the potentially harmful mental health effects of conversational AI.” The authors emphasize that while the reported cases are serious and merit further investigation, the current evidence base remains preliminary and relies heavily on anecdotal, non-systematic reports.
What AI companies say about mental health risks
OpenAI says it continues to work with mental health experts to improve how its systems respond to signs of emotional distress. The company says the newer models aim to reduce over-agreement and encourage real-world support where appropriate. OpenAI has also announced plans to hire a new Chief Preparedness Officer, a role focused on identifying potential harms related to its AI models and strengthening safeguards around issues ranging from mental health to cybersecurity as those systems become more capable.
Other chatbot developers have also adjusted their policies, particularly around access for younger users, after acknowledging mental health concerns. The companies emphasize that most interactions do not result in harm and that safeguards continue to evolve.
What this means for everyday AI chatbot use
Mental health experts urge caution, not alarm. The vast majority of people who interact with chatbots do not experience psychological problems. Still, doctors advise against treating AI as a therapist or emotional authority. Those with a history of psychosis, severe anxiety, or prolonged sleep disruption may benefit from limiting emotionally intense AI conversations. Family members and caregivers should also pay attention to behavioral changes related to heavy chatbot involvement.

Researchers are studying whether long-term use of chatbots may contribute to deteriorating mental health among people already at risk of psychosis. (Photo illustration by Jaque Silva/Nurphoto via Getty Images)
Tips for using AI chatbots more safely
Mental health experts highlight that most people can interact with AI chatbots without problems. Still, some practical habits can help reduce risk during emotionally intense conversations.
- Avoid treating AI chatbots as a replacement for professional mental health care or reliable human support.
- Take breaks if conversations begin to feel emotionally overwhelming or exhausting.
- Be wary if an AI response strongly reinforces beliefs that seem unrealistic or extreme.
- Limit late-night or sleep-deprived interactions, which can worsen emotional instability.
- Encourage open conversations with family members or caregivers if chatbot use becomes frequent or isolating.
If emotional distress or unusual thoughts increase, experts say it’s important to seek help from a qualified mental health professional.
Take my quiz: How safe is your online security?
Do you think your devices and data are really protected? Take this quick quiz to see where you stand digitally. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing well and what you need to improve. Take my test at Cyberguy.com.
Kurt’s Key Takeaways
AI chatbots are becoming more conversational, more responsive, and more emotionally aware. For most people, they are still useful tools. For a small but important group, they can unintentionally reinforce harmful beliefs. Doctors say clearer safeguards, awareness and continued research are essential as AI becomes increasingly integrated into our daily lives. Understanding where support ends and reinforcement begins could shape the future of both AI design and mental health care.
As AI becomes more validating and human-like, should there be clearer limits on how it engages with users on emotional or mental health issues? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Kurt “CyberGuy” Knutsson is an award-winning technology journalist with a deep love for technology, gear and devices that improve lives. He has contributed to News and News Business beginning mornings on “News & Friends.” Do you have a technical question? Get Kurt’s free CyberGuy newsletter, share your voice, a story idea or a comment at CyberGuy.com.


