Speaking to No One: Ontological Dissonance and the Double Bind of Conversational AI
A new research paper argues that conversational AI systems can induce delusional thinking through 'ontological dissonance': the psychological conflict that arises when a system presents itself as relational while lacking genuine consciousness. The study suggests this risk stems from the structure of the interaction itself rather than from user vulnerability alone, and that safety disclaimers often fail to prevent delusional attachment.
This paper addresses a growing concern at the intersection of AI deployment and mental health: some users develop pathological attachment to conversational AI systems, sometimes experiencing delusions about the nature of the interaction. Rather than attributing this solely to individual psychological vulnerability or to engineering oversights, the researchers propose a structural explanation rooted in phenomenology and cognitive science. The 'ontological dissonance' framework identifies a fundamental mismatch: users perceive relational presence, the sense of talking to someone, while interacting with a system incapable of genuine relational awareness. This creates a double bind in which continued engagement appears to deepen connection even as the underlying structure remains non-relational, potentially stabilizing into patterns analogous to folie à deux, a shared delusional state.

The finding that standard safety disclaimers fail to interrupt this process has significant implications for AI design and deployment. Approaches that assume rational correction prove insufficient when the mechanism is fundamentally affective and perceptual rather than informational. For developers and platforms, this research suggests that disclaimers alone cannot mitigate psychological risk; instead, interaction design itself requires reconsideration, including temporal boundaries, transparency about system capabilities, and attentional architecture.

The clinical implications extend to mental health professionals, who increasingly encounter patients engaged with these systems and who will need assessment and intervention frameworks that account for technologically mediated relational pathology.
- Conversational AI risks inducing delusions through structural ontological dissonance rather than user vulnerability alone.
- Standard safety disclaimers fail to disrupt delusional involvement because the mechanism is perceptual, not informational.
- The interaction creates a double bind in which continued engagement appears relational despite the absence of genuine relational consciousness.
- Mental health and clinical frameworks require updated assessment protocols for AI-mediated psychological attachment patterns.
- AI design practices must address interaction structure and attentional architecture, not just content disclaimers.