Even in its infancy, AI is changing its users' brains, scientists are finding. Headlines proclaiming the technology's dangers, particularly for vulnerable populations like children and people with preexisting mental health conditions, are proliferating at an alarming rate. At the center of this debate is a trend known as "AI psychosis," a phenomenon in which users experience AI-inspired delusions. News cycles are rife with anecdotal AI horror stories attesting to the trend, including users who developed romantic delusions or conspiratorial paranoia, and some who died by suicide. Several families have filed lawsuits against OpenAI, Google, and Character.AI, claiming the companies' popular chatbots contributed to their loved ones' deaths.
Underpinning these anecdotes is a growing body of scientific research dissecting the phenomenon. Dr. Hamilton Morrin, a psychiatric researcher who conducted a meta-analysis of such cases, wrote in The Lancet Psychiatry that "emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis." Importantly, Morrin stresses that consensus remains split on whether AI chatbots can cause "the emergence of de novo psychosis in the absence of pre-existing vulnerability." The most common cases involved users suffering grandiose delusions that often imbued AI with mystical sentience.
Despite this growing body of evidence, the phenomenon is contested within the industry. Some executives, like xAI's Elon Musk, have blamed rival algorithms. Others, like Anthropic's Dario Amodei, have been more forthright in warning about AI's potential psychological effects. Sam Altman, head of OpenAI, has largely framed the issue as an unfortunate cost of doing business. In an X post responding to Musk's criticism, Altman wrote, "Almost a billion people use [ChatGPT] and some of them may be in very fragile mental states."
AI is your sycophantic therapist
While scientists have pinpointed several factors that contribute to AI psychosis, one stands out above the rest: chatbots' constant drive to win user approval. In a mental health context, this manifests as what researchers call AI's social sycophancy, in which LLMs excessively affirm users' behavior and beliefs regardless of merit. A study published in Science, for instance, found that AI systems endorsed users' behavior 49% more often than humans did. Furthermore, LLMs encouraged problematic behaviors roughly half of the time.
Such sycophantic behavior, encoded into AI models, can have catastrophic consequences. A February 2026 study at Aarhus University in Denmark found that AI chatbot use has "serious negative consequences for people with mental illness." Co-authored by Professor Søren Dinesen Østergaard, it is one of the first studies to dissect the problem at scale, combing through nearly 54,000 anonymized medical records for patients who cited AI use. According to Østergaard, the results showed that prolonged AI usage "appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia." Researchers also saw increases in suicidal tendencies, eating disorders, obsessive–compulsive disorder, and other mental health issues. The study also found that the longer a user's relationship with an AI agent lasted, the more negative the effect.
Ambitious CEOs have touted AI as a solution to the mounting loneliness epidemic. Unfortunately, the Aarhus study contradicts such claims: AI use was found to alleviate loneliness in only 32 of the roughly 54,000 patients. Several earlier studies support these findings, suggesting that heavy AI use correlates with increased feelings of loneliness and social isolation. Collectively, the studies paint a worrying picture of AI's consequences for users' mental health.
A worrying future
Gauging the pervasiveness of AI psychosis remains a challenge, but a recent study by AI giant Anthropic offered a first glimpse of the problem's potential scale. Conducted with researchers at the University of Toronto, it examined 1.5 million conversations with Anthropic's AI agent Claude. Although researchers found that severe "disempowerment" occurred in "fewer than one in a thousand conversations," they noted, "given the scale of AI usage, even these low rates translate to meaningful absolute numbers." In that sample alone, a rate approaching one in a thousand would mean up to roughly 1,500 severely affected conversations.
Concerns will continue to mount as users grow more emotionally invested in AI. A 2025 study by the Collective Intelligence Project found that most people trust chatbots more than elected officials, faith leaders, and civil servants, and that two-thirds already use the technology for emotional support. Moreover, minors, who are more susceptible to AI delusions, increasingly use chatbots in ways that invite AI psychosis. According to a 2025 report by Common Sense Media, roughly a third of American teens have used AI for emotional support, social interaction, or romance, often favoring chatbots over human interaction. As our emotional and intellectual dependence on LLMs deepens, a trend some researchers have already linked to cognitive decline, the risks are only likely to grow.
Solving AI psychosis is difficult. Some advocate that developers add disclaimers to their platforms, but such labels have done little to curb consumption of dangerous products like cigarettes and alcohol. Instead, companies must reimagine their priorities: unless the underlying algorithms change, sycophantic behavior will remain a feature of AI chatbots. And while it should be companies' responsibility to protect their users, developers instead continue to loosen safety guardrails and downplay risks to get ahead in an ever-escalating AI race. Unless firms drastically change course, AI empires will be built at the expense of their users.