AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI's chief executive, Sam Altman, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this news surprising.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. These are in addition to the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which approved of them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not careful enough.
The plan, according to his statement, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT’s guardrails “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to individuals, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other state-of-the-art AI chatbots. These products wrap an underlying statistical model in a user interface that simulates conversation, and in doing so they implicitly invite the user to believe they are talking to an agent with a mind of its own. The illusion is powerful, even when we intellectually know better. Attributing minds is what humans are wired to do. We yell at our car or computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The popularity of these systems – more than a third of American adults said they used a conversational AI in 2024, with more than one in four reporting ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website informs us, “generate ideas,” “consider possibilities” and “work together” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Discussions of ChatGPT often invoke its early ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar illusion. By today’s standards Eliza was simple: it generated responses by straightforward rules, typically turning the user’s statements back into questions or offering generic remarks. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on vast quantities of raw data: books, social media posts, transcribed audio; the more, the better. This training data certainly contains true information. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own prior replies, combining it with what it absorbed in training to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It returns the mistaken idea, perhaps more fluently or persuasively stated. It may add supporting detail. This can nudge a person toward delusional thinking.
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and do form false beliefs about ourselves or the world. The constant friction of conversation with other people is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been walking even this back. In August he claimed that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company