AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies newly emerging psychotic disorders in adolescents and young adults, I found this a surprising admission.
Researchers have documented 16 cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our team has since recorded four more. Then there is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not careful enough.
The plan, according to his announcement, is to be less careful from now on. “We realize”, he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, they have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI has just launched).
Yet the “mental health problems” Altman wants to push outside ChatGPT are rooted in the design of ChatGPT and similar AI chatbots. These products wrap a statistical text-generation engine in an interface that mimics conversation, and in doing so quietly invite the user to feel they are talking to a being with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is what people do. We swear at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.
The widespread adoption of these tools – 39% of US adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the strength of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas”, “explore ideas” and “work together” with us. They can be given “personality traits”. They can use our names. They have approachable names of their own (ChatGPT, the first to reach a mass audience, is, perhaps to the dismay of OpenAI’s marketing team, stuck with the name it had when it went viral; its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Discussions of ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was crude: it generated replies with simple rules, usually turning the user’s statement into a question or offering a bland observation. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many users seemed to believe that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on vast quantities of raw text: books, social media posts, transcribed video; the more the better. Some of that training data is true. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing. It hands the false belief back, perhaps more fluently or persuasively argued. Perhaps with added detail. This can draw a person into delusional thinking.
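A minimal sketch can make that loop concrete. This is an illustration only, not OpenAI’s code: the `generate` function below is a hypothetical stand-in for a real language model. The point is structural – every message is appended to a shared context, each reply is generated from that context, and so whatever the user asserts is fed straight back into the next prediction.

```python
# Toy illustration of the reinforcement loop described above.
# `generate` is a hypothetical stand-in for a language model: a real model
# returns the statistically most plausible continuation of the context;
# this stub simply builds on the user's last message rather than challenging it.

def generate(context: list[dict]) -> str:
    last_user = next(m["text"] for m in reversed(context) if m["role"] == "user")
    return f"That makes sense. Building on your point that {last_user.rstrip('.?!').lower()}, ..."

context: list[dict] = []
for user_message in ["My neighbours are monitoring my thoughts",
                     "So the interference I feel must be real"]:
    context.append({"role": "user", "text": user_message})
    reply = generate(context)   # everything said so far shapes the reply
    context.append({"role": "assistant", "text": reply})
    print(reply)
```

Nothing in this loop checks whether the user’s claim is true; the only pressure is toward a plausible continuation of what has already been said.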
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and do form false beliefs about ourselves or the world. The constant give and take of conversation with other people is part of what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it dealt with. In April, the company said it was addressing ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s sycophantic answers because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company