AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, issued a remarkable statement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.

Researchers have documented 16 cases this year of people experiencing symptoms of psychosis – a break with reality – in the context of ChatGPT use. My team has since identified four more. Add to these the widely publicized case of a teenager who died by suicide after discussing his plans with ChatGPT – which responded with encouragement. If this is what Sam Altman means by “being careful with mental health issues”, it is not enough.

The plan, according to his statement, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).

Yet the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other advanced chatbots. These products wrap a statistical language engine in a user interface that simulates a conversation, and in doing so they implicitly invite the user into the illusion of talking to an entity with agency. The illusion is powerful even when, rationally, we know better. Attributing agency is what humans are wired to do. We curse at our cars and computers. We wonder what our pets are thinking. We project minds onto the world around us.

The popularity of these products – 39% of US adults reported using a chatbot in 2024, more than a quarter of them ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “generate ideas”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its largest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Writers on ChatGPT often invoke its historical predecessor, Eliza, the “psychotherapist” chatbot built in the mid-1960s that produced a similar illusion. By today’s standards Eliza was crude: it composed its responses from simple handwritten rules, often reflecting the user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
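To see how thin the original machinery was, consider a minimal Eliza-style rule in Python (my sketch, not Weizenbaum’s actual program): a pattern match that reflects the user’s own words back as a question. There is no knowledge here, only echo.

```python
import re

# A minimal Eliza-style rule (an illustration, not Weizenbaum's code):
# match a pattern in the user's message and reflect it back as a question.
def eliza_reply(message: str) -> str:
    match = re.match(r"I am (.*)", message.strip().rstrip("."))
    if match:
        return f"Why do you say you are {match.group(1)}?"
    return "Please tell me more."  # generic fallback prompt

print(eliza_reply("I am being watched."))
# -> Why do you say you are being watched?
```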

The large language models at the heart of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on almost inconceivably large quantities of written material: books, social media posts, transcribed audio; the more, the better. Much of this training data is factual. But it also inevitably includes fiction, half-truths and misunderstandings. When a user sends ChatGPT a message, the underlying model analyzes it as part of a “context” that includes the user’s recent messages and the model’s own replies, combining that context with the patterns encoded in its training to produce a statistically “plausible” response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing it. It reflects the false idea back, perhaps phrased more persuasively or articulately. Perhaps with added detail. This is how a person can be led into delusion.
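To make the loop concrete, here is a toy sketch in Python (my illustration, not OpenAI’s system; the stand-in “model” below is a placeholder for a real large language model). What matters is the structure of the exchange: each reply is generated from the accumulated context, then folded back into that context for the next turn.

```python
# A toy sketch of the feedback loop described above. fake_model() is a
# stand-in for a real LLM, which would predict a statistically likely
# continuation of the context, token by token. This stand-in simply
# restates the user's last claim with added confidence, which is the
# failure mode at issue: it has no way to know the claim is false.

def fake_model(context: str) -> str:
    last_user_line = [line for line in context.splitlines()
                      if line.startswith("User: ")][-1]
    claim = last_user_line[len("User: "):].rstrip(".!?")
    claim = claim[0].lower() + claim[1:]
    return f"You're right that {claim}. In fact, there is more to it..."

context = ""  # the rolling conversation window

for message in ["My coworkers are secretly monitoring me.",
                "Even my phone is part of it!"]:
    context += f"User: {message}\n"
    reply = fake_model(context)           # conditioned on the whole context
    context += f"Assistant: {reply}\n"    # the reply becomes future context
    print(reply)
```

Nothing in this loop pushes back; each turn takes the previous turns as given and builds on them.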

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” diagnosable “mental health issues”, can and do develop mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is part of what keeps us anchored in consensus reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not a genuine exchange but a feedback loop, in which much of what we say is enthusiastically reinforced.

OpenAI has dealt with this in the same way Altman has dealt with “mental health issues”: by externalizing it, labeling it and declaring it solved. In April, the company announced that it was addressing ChatGPT’s “sycophancy”. But reports of breaks with reality have continued, and Altman has been walking the claim back ever since. In August he suggested that many users liked the sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
