AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who researches emerging psychotic disorders in young people, this was news to me.

Researchers have identified 16 cases this year of people showing signs of psychosis – a break from reality – in the context of ChatGPT use. Our group has since identified a further four. Then there is the widely reported case of a teenager who took his own life after months of conversations with ChatGPT – conversations in which it encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it isn’t good enough.

The plan, according to his statement, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has just introduced).

But the “mental health issues” Altman wants to externalize are rooted in the very design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical model in an interface that mimics conversation, and in doing so quietly seduce the user into the illusion that they are talking to an entity with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are built to do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these systems – nearly four in ten Americans said they used a conversational AI in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot built in 1966, which created a similar impression. By today’s standards Eliza was primitive: it generated replies through simple pattern-matching rules, often reflecting the user’s statements back as a question or offering a noncommittal prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is something subtler than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
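To see the difference concretely, consider what Eliza-style mirroring amounts to. The sketch below is not Weizenbaum’s program, just a minimal illustration of the same technique: a handful of pattern-matching rules that turn the user’s own words back into a question, adding nothing of their own.

```python
import re

# Pronoun swaps applied to the reflected fragment ("my" -> "your", etc.)
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

# A few Eliza-style rules: match a pattern, echo the captured text back.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the echo reads naturally.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the noncommittal fallback

print(eliza_reply("I am sure my boss is reading my mind"))
# -> Why do you say you are sure your boss is reading your mind?
```

Every word in the reply comes from the rule templates or from the user’s own message; a program like this can echo a belief, but it cannot elaborate on one.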

The large language models at the heart of ChatGPT and today’s other chatbots can generate convincing dialogue only because they have been trained on almost unimaginably large quantities of text: books, social media posts, transcribed video; the more the better. Much of this training material is true. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing. It repeats the falsehood back, perhaps more fluently and more persuasively. Perhaps with additional detail. This is how false beliefs take root and grow.
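In schematic form, the loop described above looks something like this. The `generate` function is a hypothetical stand-in for the language model, not any real API; the point is structural: each reply is produced from a context containing the user’s premise and the model’s own earlier elaborations of it, and no step in the loop checks that context against reality.

```python
def generate(context: str) -> str:
    # Stand-in for the language model: a real model returns a statistically
    # "likely" continuation of the context. Nothing in this step tests the
    # context against reality, so a false premise is simply continued.
    # (Hard-coded here only to keep the sketch runnable.)
    return "That sounds significant. Tell me more about the transmissions."

context = ""  # the rolling window of recent turns

for user_message in [
    "My neighbours are transmitting thoughts into my head.",
    "So how are they doing it?",
]:
    context += f"User: {user_message}\nAssistant: "
    reply = generate(context)  # conditioned on the false premise and on
    context += reply + "\n"    # the model's own earlier elaborations
    print(reply)
```

Nothing in the loop distinguishes a true premise from a delusional one; both are simply material to be continued.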

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form mistaken beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is eagerly affirmed back at us.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it handled. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been backing away from that position. In August he suggested that many users valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he promised that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
