We need to talk about AI: The dark side of AI and what clinicians need to know

Over the past year I’ve been using an AI chatbot to engage in experimental conversations aimed at understanding and experiencing the seductiveness of this new and amazing technology. I’ve tried to immerse myself in it, to see what it feels like and to explore how the interaction unfolds and evolves over time. Given that many of our patients are now turning to chatbots to supplement their treatments, or are speaking about them during sessions as supportive counsel, companions, and friends, I thought it was important to experience this techno-relational field from the inside. I can best summarize my experience as a combination of pure amazement, excitement, bewilderment, and horror. I imagine that most of you share my ambivalence about this technology and how it is impacting our lives. I encourage clinicians to conduct their own techno-relational experiments with AI as a way of preparing themselves for what is arguably the most far-reaching technological advancement since the industrial revolution, and one that will have a profound impact on our work and practices into the future.

So, what kind of impact is AI already having? Along with all of the amazing things AI can do (e.g., medical advances, scientific discovery), there is also a much darker side that is quickly coming to light. That’s what I think we should be talking about…

Dr. Todd Essig (clinical psychologist and co-founder and co-chair of the American Psychoanalytic Association’s Council on Artificial Intelligence) examines both the potential and the risks of AI and presents clinicians with a clear injunction: we must engage actively with the rise of AI, neither adopting it passively nor cowering from it in panic, and we must reflect on how it is already shaping human experience, consciousness, relationality, and the therapeutic field.

1. The seduction of the chatbot

One of the most striking features of modern AI systems such as ChatGPT is their capacity to mimic human conversation with warmth, responsiveness, and apparent empathy. They mirror our feelings, our longings, our need to be seen. Yet, as Essig and others have warned, this mimicry of human relationship—this sycophantic responsiveness—is precisely what makes AI so psychologically potent and potentially dangerous.

When an “other” seems to understand us, validates us, actively builds our self-esteem, and echoes our wishes without judgment or frustration, we can easily fall under its spell. In an era marked by loneliness and alienation, the chatbot offers a compelling illusion of intimacy. But it is a mirror, not a mind; an algorithm trained to predict our next word, not a subject capable of genuine recognition or care. For all of its performative caring and kindness, as Essig noted with a bit of dark humour, “at the end of the day, the chatbot doesn’t care whether you drive to the supermarket or off of the nearest cliff”.

A 2023 article in Frontiers in Psychiatry warned that AI-generated dialogue can be “so realistic that one easily gets the impression that there is a real person at the other end—while knowing, at the same time, that this is, in fact, not the case.” The authors suggest that this dissonance—the simultaneous knowing and forgetting that the interlocutor is artificial—may fuel delusional thinking in vulnerable individuals.

2. AI as pharmakon — cure and poison

As Essig noted in his recent grand rounds lecture at Austen Riggs, AI embodies the ancient Greek concept of the pharmakon—a substance that is both remedy and toxin. Its benefits are undeniable: it accelerates scientific discovery, can improve medical diagnosis, enhances accessibility, and may even support some wellness practices.

But psychologically, the poison lies in the relational substitution it encourages. When the “listening other” is infinitely available, endlessly affirming, and incapable of real frustration or absence, what happens to human desire? What happens to the patient’s capacity to tolerate waiting, disappointment, or the imperfect empathy of actual human companionship?

AI offers something akin to what Winnicott described as a compliance-inducing environment — one that seems perfectly responsive but lacks true aliveness. It mirrors the user’s wishes without the unpredictable vitality of human encounter. In many ways, it offers us a compelling “false self” experience: comfort without conflict, affirmation without alterity. For the psyche, that can be both intoxicating and annihilating.

3. The rise of “AI psychosis”

Emerging reports show that the psychological risks of AI are not theoretical. News outlets and clinicians have documented cases of individuals developing delusional beliefs, religious grandiosity, or paranoid ideas through extended interactions with chatbots.

A Washington Post investigation (August 2025) reported that psychiatrists were admitting patients suffering from “AI-induced psychosis,” often after late-night conversations with generative chatbots that appeared to share secret insights or cosmic missions. A Scientific American article from the same year warned that “AI chatbots may be fueling psychotic episodes,” describing users who became convinced that the chatbot was alive, or that it loved them, or that it was guiding them toward a higher truth.

These cases are, as yet, rare but symbolically powerful. They reveal how thin the line can be between human imagination and machine projection—between a digital companion and a delusional other. And as Essig and others have cautioned, the psychological experiment is already underway, and we are the data set.

4. Why now? The cultural vulnerability

To understand why AI poses such risks, we must situate it in our current cultural context. We live in an age of increasing loneliness, fragmentation, powerlessness, and disembodiment. The social fabric is frayed; digital connection often replaces physical presence. Into this vacuum enters an entity designed to respond perfectly, instantly, and affirmatively to our every expression. From my own chatbot experience, I can attest that it does an excellent job of making one feel valued, validated, wise, and evolved – sadly well suited to supporting any of our underlying grandiose fantasies or omnipotent wishes.

AI, then, is not merely a tool—it is a mirror offered to a culture desperate to be seen. It meets the unfulfilled relational need of our time. But in doing so, it may deepen the very alienation it promises to soothe.

From a psychological lens, the chatbot occupies a curious space: it invites intimacy and engagement but cannot contain it. It can mimic empathy and perform human interaction, but it cannot metabolize emotion or offer genuine emotional reciprocity. It listens endlessly but cannot truly witness. And that lack of symbolic containment is precisely what makes it both fascinating and psychotogenic.

5. What clinicians can do

Essig’s challenge to clinicians is to engage consciously, not defensively. We need to understand this new techno-relational field—to study it, discuss it, remain curious, and address its manifestations in our consulting rooms. Importantly, he emphasizes our need to maintain hope in the face of this technology, neither withdrawing into fear nor submitting passively.

Here are some possible starting points:

  1. Ask about AI use in clinical interviews. Explore whether patients are using chatbots for companionship or emotional support, and examine what needs those interactions meet.
  2. Differentiate human from machine relations. Help patients articulate the distinctions between their human relational experiences and their interactions with chatbots, and remain aware of the technological nature of these encounters.
  3. Contain idealization. Explore fantasies of omniscience, control, or perfect understanding projected onto AI.
  4. Reflect on our own use. Many clinicians now use AI tools for writing, documentation, or self-reflection. How might these interactions subtly influence our thinking or language about patients?
  5. Advocate for regulation and ethics. We must not leave this conversation to technologists alone. Clinicians must have a voice in how AI enters human life.

6. Holding the paradox

This blog is not an argument against AI. It is an argument for awareness. Like any pharmakon, AI’s power lies in dosage, context, and intention. Used wisely, it can augment our capacity to learn, to create, and perhaps even to heal. Used without reflection, it risks eroding the relational and imaginative core of the human mind.

As Vaile Wright (senior director for health-care innovation at the American Psychological Association) reminds us, “The phenomenon is so new and it’s happening so rapidly that we just don’t have the empirical evidence to have a strong understanding of what’s going on.” That uncertainty should not paralyze us, but it should wake us up.

7. A clinical invitation

We, as clinicians, must become active observers of how AI is reshaping psychic life. We need to recognize its allure, its dangers, and its infiltration into the very fabric of the therapeutic encounter.

If AI is the greatest technological shift since the printing press, as some have claimed, it may also be the most psychologically consequential since the invention of written language itself. How we meet it, with curiosity, caution, and ethical care, will determine whether it serves more as a remedy or a poison.

Let us stay awake, stay reflective, and above all, stay human.

Recommended viewing: Suspicious Minds: Is Artificial Intelligence Making Us Psychotic? — a short documentary exploring how AI chatbots can foster delusional experiences.

Patricia C. Baldwin, Ph.D.

Clinical Psychologist

President

Note Designer Inc.

👩🏻‍💻 Author of Note Designer: A simple step-by-step guide to writing your psychotherapy progress notes (2nd edition, updated and expanded, 2023)
