Reflections on Using Artificial Intelligence in Therapy Documentation: Benefits, Concerns, and Solutions

Unless one has been living on a deserted island with no internet connection or cell towers (which certainly has its appeal), most of us have heard of ChatGPT – an artificial intelligence (AI) technology that can generate human-like text based on the input it receives. GPT stands for Generative Pre-trained Transformer, and the "Chat" part refers to how it can carry on a "conversation" with a human user by providing responses that are contextually relevant to an ongoing "dialogue". The GPT model can be used in a variety of applications, from answering academic questions and writing letters or essays to more specific use cases like tutoring, providing technical support, and even writing computer code. The creators of ChatGPT caution us, however, that while it can generate responses that seem knowledgeable and coherent (and sound eerily human), its responses are generated based on patterns and information in the data it was trained on, and it doesn't have consciousness, beliefs, or access to real-time information (at least not yet!).

One of the first things I experimented with when I first encountered ChatGPT was to see how it would compose a psychotherapy progress note. I of course entered fictitious information (well, and maybe a thing or two about myself) and asked it to write a professional-sounding psychotherapy SOAP note. I pressed "enter" and, in a matter of a few seconds, out popped a miraculously detailed and professional-sounding note. After getting over my initial amazement and my amusement at some of the liberties ChatGPT took to make the note sound thorough and complete, I began to reflect on how this technology will impact psychotherapy documentation, as well as the fields of clinical psychology, counselling, and mental health care more generally. In today's post, I'm going to focus specifically on the use of AI in psychotherapy documentation. Exploring how AI may impact our clinical work more broadly is an interesting question, but one I'll leave for another post. Let's turn now to what I consider to be some of the benefits, concerns, and solutions for using AI to assist with our documentation.

Potential Benefits:

Without a doubt, the benefits of using AI in therapy documentation are compelling and readily apparent. These include:

Efficiency – reduces the time needed to complete a progress note and eases the burden of documentation.

Professional Appearance – the text is generally well written and professional-sounding.

Consistency – can produce a standardized format for notes, ensuring all necessary information is consistently documented (though you do typically have to ask it to format your note to suit your practice needs, e.g., SOAP, BIRP, DAP).

Error Reduction – largely eliminates human errors in grammar and spelling.

Instructive – AI-generated notes are typically well organized and structured, so they readily provide good examples of, and templates for, how to write progress notes and treatment plans.


Potential Concerns:

Despite the many potential benefits, there are also some very significant concerns about using AI for our documentation. Indeed, there are currently no regulations governing AI technologies (such as ChatGPT) more generally, and many experts have voiced their anxieties about what this implies and where it may lead in the future (noteworthy are the concerns raised by one of the leading developers of AI technology itself, Geoffrey Hinton, 2023).

Security & Confidentiality – Despite reassurances that our inputs and outputs while using an AI chat are destroyed, there is no way to verify or guarantee that our content is actually being deleted and not retained in some form. Because there are no regulations governing how the data are collected, used, or destroyed, these tools cannot meet the privacy standards and compliance requirements necessary for psychotherapy documentation.

As well, given that the data are transmitted across the internet, even if just briefly, there is the possibility that the information might be hacked or otherwise intercepted. Transmitting your client's protected health information (PHI) across the internet to a large business entity with no accountability for upholding the ethical standards of the profession raises challenging ethical questions and concerns.

Authenticity & Accuracy – Though progress notes and reports generated by GPT give the appearance of professionalism and even theoretical know-how, they can readily include artificial content that sounds good but is completely false, inauthentic, or otherwise fabricated. In computer circles this phenomenon is known as AI "hallucination". In the life of a busy clinician, it would be far too easy to overlook such inaccuracies and have them end up in the official clinical record. That, obviously, would not be good on many levels (ethically and otherwise), for both you the clinician and the client.

Conceptual Degradation – Reliance on GPT models to create progress notes, treatment plans, and reports has the potential to limit the conceptualizing ability and clinical thinking of the therapist. For instance, this reliance may interfere with the clinician's elaboration and processing of the clinical material, which, in turn, may have an impact on the services they deliver. This may be particularly relevant to junior clinicians, who would otherwise benefit from greater time spent thinking about and working with their clinical material as part of their learning and ongoing professional development.


Precautions & Solutions:

With all of these considerations in mind, is it possible for clinicians to benefit from using AI technology to support their documentation while also safeguarding against the many concerns we have just considered? I believe it is possible, but it requires dedicated attentiveness to the types of information we input, what we request, and how we use the information generated. Here are some specific suggestions:

Protect all PHI – Protected health information, or any information that can identify a client (directly or indirectly), should not be entered into the AI chat input. Obviously, this includes things such as the client's name and address, but it also includes details of the client's life or circumstances that could identify them. For instance, details about a client's cultural background, sexual orientation, or occupation, combined with the specifics of a recently recounted family experience, could reveal the client's identity. If enough specific information is entered that a third party could identify the client (or the client could identify themselves), you have not fully protected the confidentiality of the clinical material.

Enter Generic Content – To safeguard against revealing PHI (both directly and indirectly), clinicians should only enter generic content and general statements about the psychotherapy session, their interventions, outcomes, assessments and progress. By limiting inputs to non-identifiable content, you can remain confident that you are compliant with regulations regarding confidentiality, privacy, and the security of electronic health information.

Careful Editing – Given the tendency of AI chat programs to take considerable liberties – embellishing or fabricating content to produce a great-sounding note – it is crucial to always carefully read and edit the output provided. The AI program cannot be held accountable when we mistakenly enter false information into the clinical record (no matter how professional the note sounds) – it is the clinician and the client who will suffer the consequences.

Client Consent – If you have any concern that non-generic or identifying information will be transmitted through your AI chat technology, seriously consider informing your clients and asking for their consent. This would involve explaining a bit about the technology you are using, what this means, and the steps you are taking to protect their private health information. You might include this in your existing treatment consent form or create a dedicated consent form for the use of AI-assisted documentation.

No doubt, AI isn't going away any time soon, and its applications are being explored in almost every field, including medicine, computer programming, graphic design, science, and teaching (to name a few). Psychotherapists are already using AI in different aspects of their practice, including for documenting their clinical work. After reflecting on the many benefits and potential risks of using this new technology to help with documentation, I suggest that we approach it with caution, attentiveness, and diligence so as to maintain our professional integrity, our ethical standards, and our respect for the privacy of our work and the confidentiality of client information.

Note Designer’s AI Progress Notes:

Following from these reflections and explorations, we have recently created an AI progress note-writer that harnesses the power of AI technology while safeguarding against important concerns over the privacy and accuracy of the clinical record. Our AI Note has the user enter all PHI in a separate field that is never transmitted or otherwise sent to the AI system. This ensures that all direct PHI remains secure and protected. As well, the only information sent to the AI system is the generic content the user has selected from our Note Designer statements and phrases. In this way, the information that is actually transmitted does not qualify as PHI, as it cannot be traced back to a particular client and reveals nothing about their identity (directly or indirectly). Another safeguard for privacy is that even this generic information is sent only from Note Designer's secure server system and not from the user's personal identifying IP address – this means the content cannot even be traced back to the individual user. With respect to accuracy, our program prompts the AI system to use only the information entered, and we have put measures in place to reduce the risk of AI embellishments. We do still, of course, caution users to check their Note Designer AI-generated note for accuracy (something we should probably do whenever reviewing our notes, no matter how they are created).

We will be launching our AI note-writer in late Fall 2023 – free for all our subscription users. As this is all very new for everyone, we encourage anyone who tries it to share their thoughts, impressions, feedback, and suggestions. At Note Designer, we take all customer feedback seriously and do our best to modify and update our program accordingly.

Thank you to all our Note Designer users.

Happy Note Writing!

Patricia – at Note Designer –
