Study Reveals Chatbots Viewed as More Judgmental Than Human Therapists

Sat 24th May, 2025

A recent study from Temple University has found that people perceive chatbots as more judgmental than human mental health providers. This finding challenges the assumption that people would feel more comfortable disclosing personal information to artificial intelligence (AI) because of a perceived lack of judgment.

Sezgin Ayabakan, an associate professor in the Management Information Systems Department at the Fox School of Business, led the research, which aimed to explore how AI could help increase access to mental health resources. The team initiated their study under the premise that the stigma surrounding mental health may deter individuals from seeking help from traditional providers. They hypothesized that people would be more open to speaking with a robot, believing that it would be less judgmental.

However, the research team conducted multiple experiments and discovered an unexpected outcome. Participants indicated that they perceived AI agents as being more judgmental than human agents, despite both types of agents displaying identical behaviors during interactions.

The study involved a series of four experiments with between 290 and 1,105 participants each. In these experiments, participants watched videos depicting conversations between a patient and either a chatbot or a human therapist. The only difference presented to participants was the nature of the agent: whether it was human or AI.

Ayabakan emphasized the significance of using vignette studies, which allow researchers to control various factors while altering just one variable. This methodology enables a clearer understanding of how perceptions shift with different agent types.

To further investigate the reasons behind the perception of chatbots as more judgmental, the researchers conducted qualitative interviews with 41 individuals. The findings indicated that many participants believed chatbots lacked the emotional depth and understanding that human therapists possess. Participants felt that AI agents could not fully grasp complex human emotions, leading to feelings of judgment.

According to Ayabakan, interviewees expressed concerns about chatbots' inability to convey empathy, compassion, and validation, all of which are crucial elements in mental health care. These limitations led individuals to feel that chatbots could not provide the human connection necessary in therapeutic contexts.

Interestingly, Ayabakan noted that participants tended to focus on the limitations of chatbots rather than their capabilities. In contrast, judgments of human agents were often based on their actions and ability to connect with patients.

This study draws attention to the complexities of integrating AI in mental health care. As technology continues to evolve, understanding user perceptions and the emotional undertones of communication remains essential for developing effective AI tools that can genuinely assist individuals seeking mental health support.