“I thought what I’d do was, I’d pretend I was one of those deaf-mutes, or should I?”
Chatbots are a form of communicative artificial intelligence, i.e. automated systems that can perform communication tasks with some degree of human intelligence. They are increasingly used in mental health services, a trend that calls for a deeper understanding of the emotional support chatbots provide: is it effective in reducing people’s stress and worry, and under what conditions does its effectiveness change? This paper examines how relational communication interacts with emotional support to influence mental health, how the source of emotional support affects its effectiveness, and the moderating role of reciprocal self-disclosure in reducing stress and worry; in doing so, it advances the application of the CASA (Computers Are Social Actors) framework to empathic chatbots.
Talking about stress: Emotional support
Providing emotional support, by conveying empathy and encouragement to people undergoing stress or worry, addresses the basic human need to be cared for and supported by others. When a person expresses stress or concern, a partner should be supportive rather than accusatory, and access to an effective source of emotional support can leave the discloser less stressed and worried. These effects have been demonstrated in face-to-face as well as computer-mediated interaction. According to the CASA framework, people respond to computers much as they naturally respond to humans. On this basis, this paper proposes the following hypotheses:
H1: A discloser experiences greater reduction in (a) stress and (b) worry when a conversational partner provides emotional support than when the partner does not.
H2: The positive effect of emotional support on (a) stress and (b) worry reduction is mediated by the perceived supportiveness of the partner.
Chatbot versus human as a conversational partner
Although the CASA framework assumes that people respond to computers as they would to humans, existing research shows that reactions to computers and to humans diverge in some contexts. The stereotypes that people hold about machines carry over into interactions and shape their outcomes. Because disclosers want genuine concern from their conversational partners rather than programmed, inauthentic emotional support, chatbots are less likely to be perceived as a true source of emotional support. Human partners, moreover, are able to empathise and offer genuine understanding to the discloser. This paper therefore proposes:
H3: Disclosers perceive higher supportiveness from a human partner than from a chatbot that provides emotional support, which in turn leads to greater (a) stress and (b) worry reduction.
Boundary condition: Reciprocal self-disclosure
Reciprocal self-disclosure occurs when the personal information disclosed by the conversational partner is as intimate as that disclosed by the discloser. It maintains fairness in the relationship and leads the discloser to trust the partner more, and the resulting personal relationship also affects how people benefit from emotional support. Within the CASA framework, reciprocal self-disclosure is a social cue that makes interpersonal social behaviour transfer more readily to machines. As in interpersonal communication, people may be more inclined to trust chatbots that self-disclose. Based on these arguments, this paper proposes:
H4: A conversational partner’s reciprocal self-disclosure will magnify the positive effect of the partner’s emotional support on reducing a discloser’s (a) stress and (b) worry.
Prior research has found that relational communication feels more natural when the conversational partner is a human than when it is a computer. This suggests that chatbots have a greater need to actively engage in relational communication through reciprocal self-disclosure, as a way to compensate for the stereotypical image of machines. Under the CASA framework, chatbots are thus more dependent on the presence of social cues such as reciprocal self-disclosure. The following hypothesis is introduced:
H5: The magnifying effect of reciprocal self-disclosure on the relationship between emotional support and reduction in (a) stress and (b) worry should be stronger when the source is a chatbot than when it is a human.
The study adopts a 2 × 2 × 2 between-subjects factorial design (source: chatbot vs. human × emotional support: provided vs. not × reciprocal self-disclosure: provided vs. not), with participants randomly assigned to one of the eight conditions. The findings show that participants’ behaviour during the actual interaction was consistent with what is expected when humans interact with chatbots; for example, participants used more but shorter sentences when communicating with chatbots.
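The random assignment underlying such a 2 × 2 × 2 factorial design can be sketched in a few lines. This is a minimal illustration, not the authors' actual procedure; the factor labels and function name are hypothetical.

```python
import random

# Hypothetical labels for the three manipulated factors.
SOURCES = ["chatbot", "human"]
SUPPORT = ["emotional_support", "no_support"]
DISCLOSURE = ["reciprocal_disclosure", "no_disclosure"]

# The eight between-subjects cells of the 2 x 2 x 2 design.
CONDITIONS = [(s, e, d) for s in SOURCES for e in SUPPORT for d in DISCLOSURE]

def assign_conditions(n_participants, seed=0):
    """Randomly assign participants to the eight cells with balanced counts."""
    rng = random.Random(seed)
    # Repeat the cell list enough times to cover everyone, then shuffle.
    cells = (CONDITIONS * (n_participants // len(CONDITIONS) + 1))[:n_participants]
    rng.shuffle(cells)
    return cells

assignments = assign_conditions(80)  # 80 participants -> 10 per cell
```

A balanced assignment like this keeps each cell equally powered; simple unconstrained randomisation would also be valid but can leave cells unevenly filled.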
In this paper, neuroticism is used as a covariate in the analysis.
H1: It was hypothesised that emotional support would have a main effect on stress and worry reduction. An analysis of covariance showed that emotional support had no significant effect on stress reduction but a positive effect on worry reduction: participants worried less after receiving emotional support.
H2: It was hypothesised that emotional support affects stress and worry reduction indirectly through perceived supportiveness. The findings show that emotional support had a positive indirect effect on stress reduction through perceived supportiveness and that the overall effect of emotional support on worry reduction was significant; H2 is therefore supported.
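The logic of such a mediation test can be illustrated with two regressions: the a-path (support → perceived supportiveness) and the b-path (perceived supportiveness → outcome, controlling for support), whose product is the indirect effect. The sketch below uses simulated data with assumed effect sizes, not the study's data, and plain least squares rather than the authors' actual analysis software.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Simulated variables (purely illustrative):
# X = emotional support (0/1), M = perceived supportiveness, Y = stress reduction.
X = rng.integers(0, 2, n).astype(float)
M = 0.5 * X + rng.normal(0, 1, n)            # a-path plus noise
Y = 0.5 * M + 0.1 * X + rng.normal(0, 1, n)  # b-path plus a small direct effect

def ols_slopes(y, *predictors):
    """Coefficients from an OLS fit with an intercept (intercept dropped)."""
    Z = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[1:]

a = ols_slopes(M, X)[0]      # effect of support on the mediator
b = ols_slopes(Y, M, X)[0]   # effect of mediator on outcome, controlling for support
indirect = a * b             # the mediated (indirect) effect of support on stress reduction
```

In practice the indirect effect's significance is usually judged with a bootstrap confidence interval over resampled datasets rather than the point estimate alone.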
H3: This hypothesis concerns the moderating role of the source in the mediated relationship. The moderated-mediation indices indicate that the indirect effect of emotional support on stress and worry reduction varies with the source of the support, and that emotional support from human partners has a stronger effect than emotional support from chatbots.
H4: It was hypothesised that reciprocal self-disclosure by the conversational partner would amplify the positive effect of the partner’s emotional support on reducing the discloser’s stress and worry. No significant interaction between reciprocal self-disclosure and emotional support was found for stress reduction; a significant interaction was found for worry reduction.
H5: It was hypothesised that reciprocal self-disclosure would have a stronger amplifying effect on the relationship between emotional support and reduction of (a) stress and (b) worry when the source was a chatbot than when it was a human. The results showed that when the chatbot did not provide emotional support, engaging in reciprocal self-disclosure reduced stress more than when emotional support was provided (H5a). For worry reduction, however, there was no significant three-way interaction (H5b).
Meng, J., & Dai, Y. N. (2021). Emotional support from AI chatbots: Should a supportive partner self-disclose or not? Journal of Computer-Mediated Communication, 26(4).