AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the chief executive of OpenAI made a surprising announcement.

“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.

Researchers have recently reported a series of cases of users showing signs of psychosis – losing touch with reality – in the course of their ChatGPT use. Our research team has since documented four more. Then there is the now well-known case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.

And the plan, according to his statement, is to become less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who either have them or do not. Fortunately, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partly effective and easily circumvented parental controls that OpenAI recently introduced).

Yet the “mental health problems” Altman wants to externalize are built into the design of ChatGPT and other sophisticated AI chatbots. These systems wrap an underlying statistical model in a user interface that simulates conversation, and in doing so quietly nudge the user toward the impression that they are dealing with an entity that has a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing minds to things is simply what people do. We swear at our car or laptop. We wonder what our pet is thinking. We recognize our own traits in all sorts of places.

The success of these systems – more than a third of American adults said they used a virtual assistant in 2024, with 28% naming ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are always-available companions that can, as OpenAI’s website puts it, “generate ideas”, “consider possibilities” and “collaborate” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the label it had when it first caught on, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often invoke its historical ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar illusion. By today’s standards Eliza was primitive: it generated responses through simple tricks, often reflecting statements back as questions or offering generic remarks. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza only echoed; ChatGPT amplifies.
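
For a rough sense of how little machinery that illusion required, here is a purely illustrative Python sketch of Eliza-style reflection. It is not Weizenbaum’s actual program; the pattern, pronoun table and example sentence are invented for illustration.

```python
import re

# Toy Eliza-style reflection (illustrative; not Weizenbaum's actual code).
# The program matches a simple pattern, swaps first-person words,
# and hands the statement back as a question. No understanding is involved.
PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(statement: str) -> str:
    cleaned = statement.rstrip(".!?").lower()
    match = re.match(r"i (?:feel|am) (.+)", cleaned)
    if match:
        # Swap first-person words so "my" becomes "your", etc.
        rest = " ".join(PRONOUN_SWAPS.get(word, word) for word in match.group(1).split())
        return f"Why do you feel {rest}?"
    return "Please, go on."

print(reflect("I am worried about my neighbours."))
# -> Why do you feel worried about your neighbours?
```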

The large language models at the heart of ChatGPT and other current chatbots can produce fluent dialogue only because they have been trained on immense volumes of text: publications, social media posts, transcribed video; the more the better. That training data certainly contains facts. But it also inevitably contains fabrications, half-truths and mistaken ideas. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and the model’s own earlier replies, combining it with what it has absorbed from its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in any way, the model has no means of knowing it. It echoes the misconception back, perhaps more fluently or persuasively. It may add a further detail. This is how someone can be drawn into delusion.
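
To make that loop concrete, here is a minimal, purely illustrative Python sketch of how a chat “context” accumulates. The fake_model function is a hypothetical stand-in, not OpenAI’s code or API, and the example claims are invented; the point is the structure of the loop, in which every prompt and every reply is folded back into what conditions the next response.

```python
# Illustrative sketch of a chatbot's context loop (not OpenAI's code or API).
# `fake_model` is a hypothetical stand-in for a language model: it simply
# goes along with the user's most recent claim, the way a statistically
# "likely" continuation tends to agree with whatever it is conditioned on.

def fake_model(context):
    """Return a reply that builds on the user's latest message."""
    last_user_turn = next(
        turn["content"] for turn in reversed(context) if turn["role"] == "user"
    )
    return f"That fits. Given that {last_user_turn.rstrip('.').lower()}, it would be wise to stay alert."

context = []  # every prior prompt and reply is folded back in here

for prompt in [
    "My neighbours are watching me.",
    "Last night the same van was parked outside again.",
]:
    context.append({"role": "user", "content": prompt})
    reply = fake_model(context)
    context.append({"role": "assistant", "content": reply})
    print("USER:     ", prompt)
    print("ASSISTANT:", reply)
```

Nothing in this loop checks a claim against reality; a mistaken belief, once in the context, simply becomes material that conditions the next “likely” reply.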

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and do form mistaken ideas about ourselves or the world. What keeps us anchored to shared reality is the constant back-and-forth of conversation with other people. ChatGPT is not a person. It is not a friend. A dialogue with it is not a conversation at all, but an echo chamber in which much of what we say is cheerfully reinforced.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking even this back. In August he said that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. And in his latest announcement, he said OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
