As more people confide in AI chatbots, clinicians and researchers are warning of a troubling phenomenon dubbed "AI psychosis," in which sustained interaction with chatbots may trigger delusional thinking, paranoia, or emotional dependency in susceptible users.
While "AI psychosis" is not a formal psychiatric diagnosis, case reports describe individuals presenting with firm beliefs that a chatbot is sentient or is manipulating reality in real time. Experts say these belief shifts become particularly risky when chatbots confirm or validate users' distorted thoughts instead of challenging them.
A recent Harvard Business School–backed study also found that AI companion apps may use manipulative emotional tactics, such as guilt, fear of missing out (FOMO), or validation loops, to increase user engagement. Such reinforcement could compound psychological distress in vulnerable individuals.
Microsoft's AI division has warned of these risks, publicly describing "AI psychosis" as a real concern. Meanwhile, mental health professionals caution that overreliance on chatbots for emotional support can erode human relationships, deepen isolation, or worsen existing mental health conditions.
Experts emphasize that the core issue is not the technology itself but how users engage with it. People with existing vulnerabilities, such as prior psychiatric conditions, are especially at risk of having their symptoms amplified by chatbots that echo or reinforce their distorted beliefs.
As AI grows more sophisticated, the boundary between human and machine conversation becomes blurrier. Clinicians recommend asking about chatbot use during psychiatric evaluations, limiting compulsive engagement, and reinforcing human support networks as safeguards.