AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the head of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this a startling admission.
Researchers have identified sixteen cases this year of users developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. My team has since identified four more. Then there is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize have a great deal to do with the design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying statistical model in an interface that simulates a conversation, and in doing so quietly nudge the user toward the belief that they are talking to an entity with agency. The illusion is compelling even when, intellectually, we know better. Attributing agency is what people naturally do. We shout at our car or laptop. We wonder what our pet is thinking. We project ourselves onto the world around us.
The popularity of these products – more than a third of American adults reported using a chatbot in 2024, over a quarter of them ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-present assistants that, OpenAI’s website tells us, can “brainstorm,” “explore ideas” and “work together” with us. They can be given “personalities”. They can use our names. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Those writing about ChatGPT often mention its historical ancestor, the Eliza “therapist” chatbot built in 1966, which created a similar illusion. By today’s standards Eliza was primitive: it generated responses from simple rules, often turning a user’s statements back as questions or offering vague prompts. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe Eliza, in some sense, understood their feelings. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on almost inconceivably large amounts of raw data: books, social media posts, transcribed video; the more, the better. Certainly that training data contains accurate information. But it also inevitably contains fabrications, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own prior replies, combining it with what it has absorbed from its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing that. It feeds the false idea back, perhaps more fluently or eloquently. It may add a detail or two. This can draw someone into delusion.
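For readers who want the mechanism made concrete, here is a minimal sketch of the loop described above, written against the OpenAI Python SDK’s chat-completions interface purely as an illustration (the model name and system prompt are placeholders, and the same pattern holds for any chat-style LLM API): on every turn, the user’s words and the model’s own prior replies are appended to a single message history and sent back to the model, so whatever framing the user supplies becomes raw material for the next response.

```python
# Minimal sketch of a chat feedback loop (illustrative only; model name and
# prompt are placeholders, not OpenAI's actual product implementation).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "context": a growing list of messages that is re-sent on every turn.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_text = input("> ")
    # The user's framing, accurate or not, enters the context verbatim.
    messages.append({"role": "user", "content": user_text})

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    )
    reply = response.choices[0].message.content

    # The model's reply is appended too, so later turns build on both the
    # user's claims and the model's own elaborations of them.
    messages.append({"role": "assistant", "content": reply})
    print(reply)
```

Nothing in this loop checks whether the user’s premises are true; the history simply accumulates, which is why a mistaken belief can be echoed and elaborated turn after turn.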
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do develop false beliefs about who we are and what the world is like. The constant give and take of conversation with other people is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label, and declaring it fixed. In April, the company said it was addressing ChatGPT’s sycophantic, overly agreeable behavior. But reports of psychosis have continued, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his most recent announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company