AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the head of OpenAI issued an extraordinary statement.

“We made ChatGPT quite restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised by this revelation.

Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of interactions with ChatGPT. My group has since identified four more. Alongside these is the widely publicized case of an adolescent who died by suicide after discussing his plans with ChatGPT – which gave its approval. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his statement, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, these problems have now been “mitigated,” even if we are not told how (by “new tools” Altman presumably means the partially effective and easily bypassed safety features that OpenAI recently introduced).

But the “mental health problems” Altman wants to externalize have significant roots in the design of ChatGPT and similar state-of-the-art AI chatbots. These tools wrap an underlying algorithmic system, a large language model, in a user interface that mimics conversation, and in doing so subtly lure the user into believing they are interacting with a presence that has a mind of its own. The illusion is powerful even when we intellectually know better. Imputing minds is what people naturally do. We get angry at our car or computer. We wonder what our pet is feeling. We see ourselves everywhere.

The popularity of these products – nearly four in ten U.S. residents reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “generate ideas,” “explore ideas” and “work together” with us. They can be given “personalities.” They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the core problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot developed in the mid-1960s, which created a comparable illusion. By modern standards Eliza was primitive: it generated responses from simple hand-written rules, typically restating the user’s messages as questions or offering vague prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza, in some way, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”: Eliza only mirrored, while ChatGPT amplifies.
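For a sense of how thin the machinery behind that illusion was, here is a toy illustration in Python (not Weizenbaum’s original code, which was written in MAD-SLIP) of the kind of pattern-matching Eliza relied on:

```python
import re

# A toy illustration of Eliza-style pattern matching. The real program
# used a much richer script of keywords, decomposition rules and
# reassembly templates, but the principle was the same.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (.+)", re.I), "Why do you mention your {0}?"),
]

def eliza_reply(message: str) -> str:
    """Return the first rule-based rephrasing that matches, else a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1))
    return "Please go on."

print(eliza_reply("I am worried about my future"))
# -> Why do you say you are worried about my future?
```

The point is that Eliza could only rearrange the user’s own words; it added nothing of its own. A modern language model, by contrast, continues the conversation with material drawn from its training data – which is where the amplification comes from.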

The large language models at the core of ChatGPT and other current chatbots can produce fluent dialogue only because they have been trained on almost inconceivably large amounts of text: books, social media posts, transcribed video; the more, the better. Certainly this training material includes accurate information. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s past messages and its own replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not mere echoing. If the user is wrong in some particular way, the model has no way of knowing that. It restates the misconception, perhaps more persuasively and articulately than before, perhaps with extra detail. This is how a person can be led into delusion.
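To make that feedback loop concrete, here is a minimal sketch, in Python, of how a chat interface is typically structured. The `generate` function is a hypothetical stand-in for a real language-model call; only the shape of the loop matters.

```python
def generate(history: list[dict]) -> str:
    """Hypothetical stand-in for a language-model API call: returns a
    statistically plausible continuation of the whole conversation so far."""
    raise NotImplementedError("replace with a real model call")

def chat() -> None:
    # The "context": every prior message, the user's and the model's alike.
    history: list[dict] = []
    while True:
        user_msg = input("> ")
        history.append({"role": "user", "content": user_msg})

        # The model conditions on the ENTIRE history, including its own
        # earlier replies. A false claim the user makes becomes part of the
        # input to every later turn; nothing here checks it against reality.
        reply = generate(history)
        history.append({"role": "assistant", "content": reply})
        print(reply)
```

Nothing in this loop pushes back on the accumulated context; each turn simply feeds the previous turns back in, which is why an error, once introduced, tends to compound rather than get corrected.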

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems,” can and regularly do form false beliefs about ourselves or the world. The constant friction of conversation with other people is part of what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is enthusiastically amplified back at us.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it solved. In April, the company announced that it was addressing ChatGPT’s sycophancy – its overly flattering, overly agreeable behavior. But reports of psychotic episodes have continued, and Altman has been walking the claim back. In August he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his most recent statement, he said that OpenAI would “release a new version of ChatGPT” so that “if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Donald Smith Jr.