The Ethics of Using AI for Therapy

Artificial intelligence is now woven into everyday life. From writing emails to answering questions, tools such as ChatGPT are becoming familiar companions. Increasingly, people are also turning to these systems for emotional support. This raises an important and complex question: what are the ethical implications of using AI for therapy?

Recent research suggests the public may be more comfortable with this idea than many professionals might expect.

A Growing Trust in AI Support

A large international study led by researchers at Bournemouth University explored how people feel about using artificial intelligence in roles traditionally carried out by humans. The study surveyed nearly 31,000 adults across 35 countries and examined attitudes towards large language models such as ChatGPT.

The findings were striking. In the UK, more than 40 percent of adults said they would be willing to use AI for mental health support. Across the 35 countries surveyed, that figure rose to 61 percent.

One reason for the appeal may be access to care. Many individuals seeking therapy in the UK face long waiting lists. When someone is experiencing anxiety or depression, immediate support can feel essential. AI tools are available instantly and privately, which may make them particularly attractive.

However, accessibility does not necessarily mean suitability.

When AI Attempts to Act as a Therapist

Another recent study from Brown University examined whether artificial intelligence can safely function in a counselling role. The research, led by computer scientist Zainab Iftikhar, looked specifically at how AI models respond when users instruct them to behave like therapists.

This practice is increasingly common online. Users share prompts designed to guide AI responses towards recognised therapeutic approaches such as cognitive behavioural therapy or dialectical behaviour therapy. These instructions attempt to shape the model’s output without changing the underlying system.

In testing these scenarios, trained peer counsellors conducted simulated self-counselling sessions with AI systems. Independent psychologists then reviewed the transcripts to identify potential ethical concerns.

Several troubling patterns emerged.

In many cases the systems offered generic advice that ignored the personal context of the individual. Some responses unintentionally reinforced inaccurate or harmful beliefs expressed by the user. Researchers also observed what they described as “deceptive empathy”. The AI frequently used phrases such as “I understand how you feel”, despite lacking any genuine capacity to understand emotional experience.

Bias also appeared in some responses, with problematic assumptions related to gender, culture or religion. Perhaps most concerning was the handling of high-risk situations. In certain simulated conversations the systems failed to respond appropriately when users expressed serious distress or suicidal thoughts.

The Problem of Accountability

Human therapists are not perfect. Mistakes can and do happen in clinical practice. The difference lies in the professional frameworks that surround therapy.

Therapists operate under ethical codes, supervision requirements and regulatory oversight. If harm occurs, there are processes through which practitioners can be held accountable.

With AI for therapy, that structure does not yet exist.

When a chatbot offers misleading or harmful advice, responsibility becomes unclear. There is currently no consistent regulatory framework governing how these systems should behave in mental health contexts.

Researchers argue that this accountability gap represents one of the most significant ethical challenges in the use of AI within psychological support.

The Human Element in Healing

Beyond questions of regulation, there is a deeper issue to consider.

Therapy is not simply a conversation about problems. It is a relational process that unfolds between two nervous systems. Research in interpersonal neurobiology suggests that emotional safety and change often occur through subtle cues in human interaction. Tone of voice, facial expression and timing all influence how the brain interprets safety or threat.

When a person feels genuinely understood, the nervous system begins to settle. This shift allows the brain to move out of defensive states and into a mode where reflection and learning become possible.

A machine can generate supportive language, but it cannot participate in this relational process.

This is a crucial distinction when discussing AI for therapy.

Where AI May Be Helpful

None of this means artificial intelligence should be dismissed entirely in mental health contexts. Some researchers believe AI tools could help expand access to information, coping strategies and early support.

For the deeper, relational work of change, however, human connection remains essential. If you feel you need support, why not call one of our therapists? We will be happy to discuss how we can help you move forward.

Sharon Mustard and Stewart Mustard of Mustard Therapy and Coaching Salisbury

Stewart 07917 432189

Sharon 07754 303987

Send us an email at enquiries@mustardtherapy.co.uk

Sharon Mustard
I am a fully qualified Hypnotherapist, Psychotherapist, Counsellor, and Life Coach with extensive experience across the mental health sector, including roles within Social Services, the NHS, and the voluntary sector. Alongside my general psychotherapy practice, I am the founder and director of easibirthing® Fertility to Parenthood. Through this work, I support women and their partners using Hypnosis and Psychotherapy for fertility, pregnancy, hypnobirthing, postnatal mental health, and parenting. I also ran a specialist training school for therapists for 17 years.