What Therapists Are Really Saying About AI in Practice, and Why Ethics May Matter More Than Innovation
Artificial intelligence is appearing more often in health care conversations, yet in everyday clinical practice many therapists remain cautious (and, I believe, with very good reason).
A recent report from the American Psychological Association helps explain why.
According to findings from the 2024 Practitioner Pulse Survey, only 29% of psychologists reported using AI tools in their practice within the previous year. Even fewer, just 11%, used them on a monthly basis or more.¹
This suggests something important.
AI may be advancing quickly, but adoption by practitioners continues to be shaped by ethical reflection, professional identity, and trust.
That is where the real conversation belongs.
Where Practitioners See Value in AI
Among clinicians who do use AI tools, the most common applications are practical rather than clinical:
- Writing assistance
- Content generation
- Article summarization
In other words, practitioners are using AI primarily to manage workload and information demands, not to replace clinical thinking.
When asked about benefits, respondents pointed to:
- Improved operational efficiency
- Easier access to scientific literature
- Better patient education tools
These gains matter, especially in a profession already strained by documentation demands and time pressure.
For many clinicians, this raises an important question about how AI tools differ in their approach to documentation and privacy. We explore this further in our post on how Note Designer differs from other AI note programs.
Why Concerns Still Outweigh Benefits
What stands out even more clearly in the APA data is that more practitioners identified concerns than benefits.
Top worries included:
- Risk of breaches involving sensitive data
- Unanticipated social harm
- Biased input producing biased output
More than half of respondents said they were not aware of any benefits of using AI in practice, while only a minority said they were unaware of concerns.
That imbalance is telling.
It shows that hesitation about AI in mental health care is not resistance to change; it reflects ethical caution grounded in professional responsibility.
These concerns are echoed in the APA’s recent ethical guidance on AI use in clinical settings, which we summarize in APA’s new ethical guidance for using AI in clinical practice.
The Ethical Tension at the Heart of AI in Therapy
Psychology, counselling, and psychotherapy are different from many other fields experimenting with artificial intelligence.
Clinicians work with highly sensitive personal information, vulnerable life experiences, and legal and ethical standards that place confidentiality at the center of practice.
When practitioners hesitate, it is not because they lack openness to innovation; it is because they understand the stakes.
The APA report makes this clear. Adoption of AI tools in behavioral health will depend less on novelty and more on ethical rigor, especially when it comes to protecting privacy and confidentiality.¹
That perspective aligns closely with how we think about technology at Note Designer, and with the principles behind our ethical approach to AI.
A Different Model of AI in Documentation
We often hear clinicians say something simple and very telling:
“I do not want AI to listen to my sessions. I just want help writing better notes.”
That distinction matters.
Our approach to AI has never been about recording therapy sessions, storing clinical conversations, or extracting data from patient interactions. It is about supporting clinicians in turning their own clinical judgment into clear, ethical, and well-structured documentation.
Used this way, AI becomes:
- A writing assistant
- A clarity tool
- An organizational support, not a clinical authority
Most importantly, it stays under the clinician’s control.
Why This Moment Matters
The APA data suggests that the profession is at a turning point.
Practitioners are not rejecting AI outright. They are signaling that:
- Innovation without ethics will not earn trust
- Convenience without confidentiality will not gain adoption
- Efficiency without standards will not serve the profession
The future of AI in mental health practice will not belong to the most aggressive technology. It will belong to the tools that respect clinical judgment, protect privacy, and reduce burden without compromising values.
That is the future we are committed to building.
Final Thought
AI does not need to replace the human core of clinical practice to be useful. It only needs to support the work clinicians already do, responsibly and transparently.
If the APA data teaches us anything, it is this: practitioners are not asking whether AI can do more. They are asking whether it can do better. At Note Designer, we take this very seriously and are committed to ensuring that clinicians have access to AI tools that respect the ethical standards and integrity of the profession.
Reference
¹ American Psychological Association. (2024). Barriers to care in a changing practice environment: 2024 Practitioner Pulse Survey. APA Center for Workforce Studies. https://www.apa.org/pubs/reports/practitioner/2024

Patricia C. Baldwin, Ph.D.
Clinical Psychologist
President of Note Designer Inc.