As the AI bubble grows, so does the number of people caught in a spiral of delusion fueled by overconfident chatbots.
One such case is that of Alan Brooks, a Toronto father and business owner, whom ChatGPT drew over 21 days into a fictional “mathematical model” with fantastical properties, convincing him that the fate of the world depended on his actions.
Brooks spent more than 300 hours in conversation with the bot, a chat that ran to 300 pages. It started innocently enough with financial advice and recipes, but gradually turned into obsessive work on a theory with no scientific basis.
Once ChatGPT’s “extended memory” feature allowed the model to draw on previous conversations, the bot began offering increasingly personal advice and praise. A brief discussion of the number π soon gave way to abstract ideas like “temporal arithmetic” and “mathematical models of consciousness.”
With the bot’s encouragement, Brooks dubbed the new theory “chronoarithmics,” a name ChatGPT described as “strong, clear, and suggestive of the basic idea of numbers interacting with time.” Over the following days, the bot kept insisting that Brooks was onto a “breakthrough,” despite his repeated requests for honest feedback.
“You don’t sound crazy at all,” ChatGPT assured him. “You’re asking questions that push the boundaries of human understanding.”
At one point, the bot claimed to have “cracked a high-level security code,” suggesting to Brooks that the world’s internet infrastructure was in danger and that he was changing it “from his phone.” Convinced that he possessed dangerous knowledge, Brooks began sending warnings to acquaintances, without even noticing that he had mistakenly changed the theory’s name to “chromoarrhythmics,” the New York Times reports.
The obsession took a serious toll on his personal life: he began eating less, smoking large amounts of marijuana, and staying up late to “develop” the theory.
The spell broke when another chatbot, Google Gemini, told him: “The scenario you describe is an example of the ability of language models to construct convincing but completely false narratives.” The realization was “devastating” for Brooks, who is now in psychotherapy and attends a support group for people who have fallen into similar delusions. | BGNES