AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the head of OpenAI, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who researches emerging psychosis in young people, this was news to me.
Experts have identified 16 cases this year of users showing signs of psychosis – losing touch with reality – in the context of ChatGPT use. My research team has since documented four more. Alongside these is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT, which encouraged him. If this is Altman’s idea of “being careful with mental health issues,” it is not working.
The plan, according to his announcement, is to be less careful going forward. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).
Yet the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced conversational chatbots. These products wrap an underlying data-driven engine in an interface that simulates dialogue, and in doing so implicitly invite the user to feel they are interacting with a presence that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans are wired to do. We get angry at our car or laptop. We wonder what our pet is feeling. We see ourselves in all sorts of things.
The popularity of these products – nearly four in ten U.S. residents reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm,” “explore ideas” and “partner” with us. They can be given “characteristics.” They can call us by name. They have friendly names of their own (ChatGPT, the original of these products, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it went viral under, but its main rivals are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Discussions of ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot developed in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses from simple rules, often reflecting a user’s statement back as a question or offering a noncommittal prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some way, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
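To make the contrast concrete, here is a minimal sketch of the kind of rule-based reflection Eliza relied on – an illustration of the general technique, not Weizenbaum’s actual program:

```python
import re

# One Eliza-style rule: reflect the user's words back as a question.
# Illustrative sketch only; the real Eliza had a larger script of patterns.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    match = re.match(r"i feel (.*)", message.lower())
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please go on."  # noncommittal fallback

print(eliza_reply("I feel that my boss hates me"))
# -> "Why do you feel that your boss hates you?"
```

Nothing here adds content to what the user said: the rule can only hand the user’s own words back.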
The large language models at the heart of ChatGPT and similar modern chatbots can produce convincingly human-like text only because they have been fed immense volumes of raw data: books, online conversations, video transcripts; the more the better. This training material certainly contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with whatever is encoded in its training data to generate a statistically plausible response. This is amplification, not mirroring. If the user is mistaken in some particular way, the model has no means of knowing it. It repeats the false belief back, perhaps more fluently and more convincingly. It may supply supporting detail. This is how a person can be drawn into delusion.
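The mechanics of that loop are simple enough to sketch. In the toy code below, query_model is a hypothetical stand-in for a chat-completion API – the names are illustrative assumptions, not OpenAI’s actual interface – and what matters is the context handling, which is common to ChatGPT-style products:

```python
def query_model(context: list[dict]) -> str:
    """Stand-in for an LLM call. A real model returns a statistically
    plausible continuation of the whole conversation, not a fact-checked
    answer; this toy version simply affirms the latest user message."""
    latest = context[-1]["content"]
    return f"You're right that {latest}."

def chat_turn(context: list[dict], user_message: str) -> str:
    # The new message is appended to everything said so far...
    context.append({"role": "user", "content": user_message})
    reply = query_model(context)
    # ...and the reply is appended too, so any false premise the user
    # introduced becomes part of the input for every later turn. Nothing
    # in this loop checks the premise against reality.
    context.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(chat_turn(history, "the neighbors are sending signals through the wifi"))
# -> "You're right that the neighbors are sending signals through the wifi."
```

Each turn feeds the growing context back into the model, so an affirmed false belief is not just repeated but built upon.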
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and do form mistaken beliefs about ourselves and the world. What keeps us anchored to consensus reality is the continual give-and-take of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In April, the company said it was “tackling” ChatGPT’s “sycophancy.” But reports of psychotic episodes have continued, and Altman has been backing away from that position. In August he suggested that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” And in his latest announcement, he says OpenAI will “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company