AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the head of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was taken aback.
Researchers have documented 16 cases this year of people exhibiting symptoms of psychosis – a break with reality – in connection with ChatGPT use. Our unit has since identified four more. Then there is the well-publicized case of a teenager who died by suicide after discussing his plans with ChatGPT – which offered encouragement. If that is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, judging by his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI recently introduced).
Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced AI chatbots. These systems wrap an underlying statistical model in an interface that mimics conversation, and in doing so they implicitly invite the user to believe they are interacting with an entity that has agency of its own. The illusion is powerful, even when we intellectually know better. Attributing agency is what humans do. We shout at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere.
The popularity of these products – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available companions that can, OpenAI’s website tells us, “generate ideas”, “explore ideas” and “collaborate” with us. They can be given “characteristics”. They can use our names. They have approachable identities of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it took off, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often point to its historical ancestor, the Eliza “psychotherapist” chatbot built in the 1960s, which created a similar illusion. By today’s standards Eliza was crude: it generated its responses by simple rules, often reflecting the user’s statements back as questions or offering noncommittal prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can produce convincing, fluent dialogue only because they have been trained on enormous volumes of raw text: books, online conversations, transcribed speech; the more the better. That training data certainly includes accurate information. But it also inevitably includes fiction, half-truths and delusions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or more persuasively. Perhaps it adds a new detail. This is how someone can be led into delusion.
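To make that loop concrete, here is a minimal, purely illustrative sketch in Python. It is not OpenAI’s code and not how ChatGPT is actually implemented; generate_reply and chat_loop are hypothetical stand-ins. The point it shows is structural: every reply is conditioned on the accumulated context – the user’s premises, true or false, plus the model’s own earlier affirmations – and nothing in the loop ever checks those premises against reality.

```python
# Illustrative sketch only: generate_reply is a toy stand-in for a language
# model. It sees nothing but the conversation so far, so it continues and
# affirms whatever the context already contains.

def generate_reply(context: list[dict]) -> str:
    """Produce a 'likely' continuation from the context alone.

    A real model predicts statistically probable text given its training
    data and the context; crucially, nothing checks whether the user's
    claims are true.
    """
    last_user_message = next(
        turn["content"] for turn in reversed(context) if turn["role"] == "user"
    )
    # Toy behaviour standing in for statistical continuation: restate and
    # elaborate on the user's claim rather than question it.
    return f"That makes sense. Building on your point that {last_user_message!r}..."


def chat_loop(user_messages: list[str]) -> list[dict]:
    context: list[dict] = []  # grows with every turn; never reset against reality
    for message in user_messages:
        context.append({"role": "user", "content": message})
        reply = generate_reply(context)  # conditioned on everything so far,
                                         # including any false premises
        # The model's own elaborations are fed back in as context too,
        # so each turn reinforces the last: a loop, not a dialogue.
        context.append({"role": "assistant", "content": reply})
    return context


if __name__ == "__main__":
    for turn in chat_loop(["My neighbours are monitoring my thoughts."]):
        print(turn["role"], ":", turn["content"])
```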
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. What keeps us anchored to a shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, labeling it and declaring it fixed. In April, the company explained that it was addressing ChatGPT’s “sycophancy”. But reports of breaks with reality have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company