AI and Mental Health: The Ethics of Using Chatbots as a Therapist

It’s 11pm. You can’t sleep. You’re feeling overwhelmed, anxious, and your thoughts are spiralling. Instead of texting a friend or scrolling endlessly, you open a mental health app. Within seconds, a chatbot greets you: “Hi there, how can I help you tonight?” It asks you to rate your mood, suggests some breathing exercises, and offers supportive phrases like, “You’re doing your best.” It feels reassuring. Safe. Like someone’s really there. But is it therapy?

The rise of artificial intelligence (AI) in mental health support has opened up a world of possibilities. AI-powered chatbots now offer around-the-clock guidance, track your moods, and even deliver structured interventions like cognitive behavioural therapy (CBT) exercises. For many, especially those with limited access to care, these tools can feel like a lifeline. But as the technology becomes more sophisticated and more widely used, it also raises a serious question: should we treat a chatbot like a therapist?

First, let’s be clear: chatbots aren’t human. No matter how realistic or compassionate their responses may seem, they don’t understand emotion the way a person does. They don’t form relationships. They don’t sit with you in silence when there’s nothing to say, which can itself be an incredibly powerful way of feeling connected. What they offer is programmed, pre-written, and designed to simulate empathy, but it isn’t empathy in its true sense. That distinction matters, especially in moments of real vulnerability. So what is empathy?

Empathy is the ability to understand and share the feelings of another person. Psychologists generally describe empathy as having three main forms:

  1. Cognitive empathy is the ability to understand someone else's thoughts, perspectives, or mental state. It’s the kind of empathy that helps you say, “I get why they’re feeling that way,” even if you don’t feel it yourself. This form is often linked to effective communication and perspective-taking, especially in professional or clinical settings. This is the form of empathy we see mirrored in chatbots.

  2. Emotional (or affective) empathy is when you actually feel what another person is feeling. If someone is sad, you might feel a pang of sadness yourself. This form creates emotional resonance and is often what drives compassion and connection. This is the form of empathy we feel when sitting with people around us while in a vulnerable state, something a chatbot cannot replicate.

  3. Compassionate empathy (also called empathic concern) goes one step further—it not only involves understanding and feeling another’s experience, but also includes a desire to help. It’s empathy in action. This is particularly important in caregiving and therapeutic roles, where emotional understanding is paired with support.

One of the biggest ethical concerns with chatbots is around boundaries and expectations. If someone begins to rely on an AI chatbot as their main form of emotional support, they may start to believe they’re receiving therapy when they’re not. The chatbot may offer helpful tools and a sense of comfort, but it can’t provide clinical judgment, crisis support, or the complex, dynamic relationship that underpins real therapeutic change, the kind we experience when working with a professional. That kind of misunderstanding can be risky, especially if someone is experiencing suicidal thoughts, trauma, or a serious mental health condition.

And then there’s the question of equity. On one hand, chatbots might increase access to support for people who can’t afford therapy or live in areas where services are hard to come by. On the other hand, we have to ask: are we creating a two-tiered system, where some people get a therapist and others get an app? While AI tools can supplement care, they shouldn’t become a replacement for it just because they’re cheaper or more convenient.

That said, AI chatbots aren’t inherently bad. In fact, they can be a useful tool for people looking to track their moods, practise skills, or find brief support between therapy sessions. For some, particularly those who find it hard to open up in person, they offer a low-pressure, judgement-free way to explore how they’re feeling. Used thoughtfully, AI can absolutely play a role in improving mental health access and literacy. Chatbots can also help you identify concerns and patterns that you then take to your therapist to explore in more detail.

The key is in how we frame it. AI is not therapy. Chatbots are not therapists. But they can be one part of a broader mental health support system, like a digital stepping stone toward help, or a supportive companion alongside more formal care.

As this technology continues to evolve, we need open conversations about its benefits and its limits. We need clearer ethical guidelines, better transparency about data use, and a stronger commitment to ensuring that digital tools don’t replace real human connection—but rather support it.

Because at the end of the day, while a chatbot might tell you, “You’re not alone,” it’s the presence of another human being—someone who listens, understands, and holds space—that truly reminds you of that truth.

Our highly trained psychologists can help. Please call us on 9882-8874 to book in with one of our team members today. Alternatively, fill in our contact form here to get in touch.

To subscribe and listen to our podcast “Breaking the Rules: A Clinician’s Guide to Treating OCD”, click on the following links: Spotify, Google Podcasts, and Apple Podcasts. Episodes will be released fortnightly and will simultaneously be published on our webpage here.