Original Paper
Abstract
Background: Chatbots are increasingly used to support COVID-19 vaccination programs. Their persuasiveness may depend on the conversation-related context.
Objective: This study aims to investigate the moderating role of the conversation quality and chatbot expertise cues in the effects of expressing empathy/autonomy support using COVID-19 vaccination chatbots.
Methods: This experiment with 196 Dutch-speaking adults living in Belgium, who engaged in a conversation with a chatbot providing vaccination information, used a 2 (empathy/autonomy support expression: present vs absent) × 2 (chatbot expertise cues: expert endorser vs layperson endorser) between-subject design. Chatbot conversation quality was assessed through actual conversation logs. Perceived user autonomy (PUA), chatbot patronage intention (CPI), and vaccination intention shift (VIS) were measured after the conversation, coded from 1 to 5 (PUA, CPI) and from –5 to 5 (VIS).
Results: There was a negative interaction effect of chatbot empathy/autonomy support expression and conversation fallback (CF; the percentage of chatbot answers “I do not understand” in a conversation) on PUA (PROCESS macro, model 1, B=–3.358, SE 1.235, t186=2.718, P=.007). Specifically, empathy/autonomy support expression had a more negative effect on PUA when the CF was higher (conditional effect of empathy/autonomy support expression at the CF level of +1SD: B=–0.405, SE 0.158, t186=2.564, P=.011; conditional effects nonsignificant for the mean level: B=–0.103, SE 0.113, t186=0.914, P=.36; conditional effects nonsignificant for the –1SD level: B=0.031, SE 0.123, t186=0.252, P=.80). Moreover, an indirect effect of empathy/autonomy support expression on CPI via PUA was more negative when CF was higher (PROCESS macro, model 7, 5000 bootstrap samples, moderated mediation index=–3.676, BootSE 1.614, 95% CI –6.697 to –0.102; conditional indirect effect at the CF level of +1SD: B=–0.443, BootSE 0.202, 95% CI –0.809 to –0.005; conditional indirect effects nonsignificant for the mean level: B=–0.113, BootSE 0.124, 95% CI –0.346 to 0.137; conditional indirect effects nonsignificant for the –1SD level: B=0.034, BootSE 0.132, 95% CI –0.224 to 0.305). Indirect effects of empathy/autonomy support expression on VIS via PUA were marginally more negative when CF was higher. No effects of chatbot expertise cues were found.
Conclusions: The findings suggest that expressing empathy/autonomy support using a chatbot may harm its evaluation and persuasiveness when the chatbot fails to answer its users’ questions. The paper adds to the literature on vaccination chatbots by exploring the conditional effects of chatbot empathy/autonomy support expression. The results will guide policy makers and chatbot developers dealing with vaccination promotion in designing the way chatbots express their empathy and support for user autonomy.
doi:10.2196/41148
Keywords
Introduction
Background
Chatbots are used as a promising tool to promote COVID-19 vaccinations [
, ] as they offer the possibility of upscaled interactions with users. Chatbots can also support health communication by engaging users via social media channels [ ]. Unfortunately, there is a considerable amount of resistance toward COVID-19 vaccinations in society [ , ], and chatbots themselves may evoke negative user responses (eg, [ ]). Specifically, the effects of chatbot empathy have been found to be ambiguous. On the one hand, empathy displayed by a chatbot (ie, a chatbot showing its understanding of and care for the user’s experience) may lead to a positive user response [ ]. It may also enhance autonomy support (ie, provide the user with a sense of freedom in acting upon the information received from the chatbot), which is considered to diminish psychological reactance [ , ]. On the other hand, previous studies [ , , ] suggest that the consequences of artificial agents’ empathy may also be negative. These studies focus on the way of expressing empathy, suggesting that being too “humanlike” is harmful. However, the role of the conversation-related context, such as conversation quality and chatbot expertise cues, remains understudied. Because this conversation-related context is manageable, the gap is also important from the practical perspective of vaccination chatbots. Therefore, this study aims to bridge this gap by investigating how conversation quality and chatbot expertise cues moderate the effect of expressing empathy/autonomy support using a COVID-19 vaccination chatbot. We propose that the effect of expressing empathy/autonomy support is less positive (or more negative) when the actual answers provided by the chatbot fail to resolve the queries submitted by users, because users may have higher expectations for the quality of interaction if a chatbot interacts in a more human way [ , ]. Given the technical complexity of chatbot development, especially for topics that require a lot of domain-specific and rapidly changing knowledge, such as the COVID-19 vaccines, conversation fallback (CF, ie, the situation in which a chatbot is unable to answer a user’s question) is plausible [ , ]. In addition, users may react more positively to the chatbot empathy/autonomy support expression when they perceive the chatbot to be more trustworthy and competent [ ], which is relevant to a controversial topic, such as vaccinations. Therefore, we considered an expertise cue in the form of the chatbot endorser's credibility as an additional moderator of the empathy/autonomy support expression effect. We examined these moderating effects in an experimental study with different levels of expression of empathy/autonomy support and endorser credibility for a COVID-19 vaccination chatbot (VaccinChat), considering the actual level of conversation quality.
According to the Behavioral Intervention Technology Model [
], our chatbot (VaccinChat) can be considered a behavioral intervention technology whose clinical aim is to increase users’ intention to get vaccinated against COVID-19. The chatbot patronage intention (CPI, including the willingness to use and recommend the chatbot) is considered the usage aim. The behavioral intervention strategy is education (ie, providing information about how vaccines work and how a user can get vaccinated). Accordingly, the chatbot endorser credibility (expert vs layperson) can be considered a relevant expertise cue contributing to the user's initial trust toward the chatbot as an educational tool. Behavioral intervention elements are the pieces of vaccine-related information provided by the chatbot. They can be characterized as having a textual form, being instantly delivered online, and being personalized based on user queries formulated in natural language. The expression of empathy/autonomy support is an additional characteristic that is experimentally manipulated and varies based on specific questions asked by a user. The intervention workflow is user defined (ie, a user may decide whether and how frequently to reuse the chatbot). However, our experimental setting included a single intervention (ie, the chatbot was used only once by each study participant).
Theoretical Development
Supporting Vaccinations and the Role of Chatbots
Even amid the COVID-19 pandemic threat, vaccine hesitancy, fueled by misinformation and conspiracy theories, remains an issue in health promotion efforts [
]. At the end of 2021 (about 1 year after making COVID-19 vaccines available), less than 69% of European Union (EU) citizens were vaccinated against COVID-19 (primary course), with much lower rates in some countries, such as Bulgaria (28%) [ ]. The existing literature highlights numerous factors influencing vaccination intent, such as perceived risks, benefits, and costs of vaccination [ - ]; social and media communication on vaccines [ - ]; and advertising [ , ]. Tough measures to increase the uptake level based on coercion or tangible incentives are problematic. Specifically, imposing mandatory COVID-19 vaccinations faces considerable objections (eg, half of German adults were against mandatory vaccinations in 2020) [ ]. The effectiveness of promoting vaccinations through monetary incentives has been shown to be limited [ ]. Therefore, it is crucial to promote vaccinations by informing the public about them [ ] through mass and personal communication, training, and education [ ]. Various new media and technologies have been used to convey vaccine-related messages in a large-scale but personalized way. Vaccine promotion organizations use websites and social media to share information and raise public awareness [ , ].
Chatbots can be accessible to many users and enable conversations with them, supporting scaling up and personalizing health communication [
, ]. In the context of addressing user doubts, chatbots can determine which of the most common counterarguments in a given population are endorsed by a given user and provide relevant rebuttals [ ]. It is proposed [ ] that chatbots can combine the knowledge of conversational norms and relational strategies with storing conversation history for a given user to scale up the relational capacity. Although people treat chatbots differently from humans (eg, by using shorter messages and poorer vocabulary [ ]), such conversations may serve as a proxy for personal communication. This may ensure user engagement [ ] and responsiveness to users' doubts [ , ]. Moreover, people tend to consider machines more objective and less biased than humans (“machine heuristic” [ ]), which may be advantageous for chatbots versus humans in health communication. Chatbots may inform on and change COVID-19–preventive behaviors [ ]. Altay et al [ ] demonstrated that COVID-19 vaccination intention increased after interacting with a chatbot responding to counterarguments. Parents provided with vaccine information by chatbots were more in favor of children's vaccinations [ ]. Amiri and Karahanna [ ] reported over 20 publicly accessible chatbots that provide virus and vaccine education, some of which offer misinformation/disinformation fact checking. However, the quality of information that vaccination chatbots provide is considered a cause for concern [ ].
The Pitfall of Expressing Empathy/Autonomy Support Using Chatbots
In the context of vaccination, promotion may be perceived as limiting one's freedom. Namely, convincing or persuading someone to vaccinate pushes the communication receiver toward an activity (ie, vaccination) that may be perceived as intrusive and as affecting their body and health. Thus, psychological reactance, a motivational state oriented toward reestablishing one's freedom when it is threatened [
, ], may play a key role in responding to vaccine communication. Displaying empathy in health communication, that is, showing the recognition of the receivers' experience, may make them feel supported [ ]. Specifically, by taking the receiver’s perspective, health communication becomes less pushy and is perceived as more internal, which may lead to a higher perceived autonomy of the receiver [ ]. Therefore, we consider expressing empathy and supporting autonomy (denoted as “expressing empathy/autonomy support”) as an internally coherent property of a chatbot. This property can be manifested in a conversation by various expressions suggesting that a chatbot understands its users, takes their perspective, and stands by their side instead of pushing them. As posited by reactance theory [ ], granting autonomy by enhancing the freedom of choice in communication may diminish the receivers' reactance. Health communication using less controlling language and taking the receivers' perspectives by highlighting that they are free to decide what is best for them was demonstrated to evoke less negative reactions, such as less anger and a lower perceived threat to freedom, respectively [ ].
Chatbots may differ in expressing empathy/autonomy support, and the influence of this expression on the user response appears to be complex. The Computers as Social Actors (CASA) approach [
, ] posits that people tend to perceive their interactions with machines (eg, chatbots) as interactions with other people. As empathy may be considered an anthropomorphic dimension of conversational agents [ ], one may expect that expressing empathy/autonomy support using chatbots produces positive user reactions. Accordingly, De Gennaro et al [ ] showed that the effect of experienced social exclusion was less negative when participants interacted with an empathic chatbot instead of an interactive questionnaire. The chatbot’s empathy manifested itself in its responses to the participants mentioning their exclusion in social media. The chatbot expressed understanding, said it felt sorry, and comforted the participants. However, the Uncanny Valley of Mind (UVM) theory [ ] posits that a machine that is too similar to humans may evoke negative reactions (eg, being perceived as eerie). Liu and Sundar [ ] demonstrated both positive and negative effects of chatbot empathy on perceived chatbot eeriness. In their experiment, the participants read a predefined script of the conversation between a user and a chatbot. The experimental conditions differed in the form of empathy expressed by the chatbot, and the chatbot expressed no empathy in the advice-only control condition. Suggesting that the chatbot felt sorry for the users’ problems was negatively related to the perceived eeriness, but suggesting that the chatbot understood users’ feelings about their problems was positively related to the perceived eeriness. Similar negative effects of empathy have been demonstrated for other artificial agents. For example, Stein and Ohler [ ] reported that users perceived an empathic virtual reality avatar as eerier. A robot expressing a higher (vs a lower) ability to infer the mental states of its users was demonstrated to be perceived as creepier [ ]. More generally, in some studies, anthropomorphic robot features were demonstrated to be negatively related to user attitudes [ , ], and emotions displayed by chatbots may harm user autonomy [ ].
The existing evidence on the consequences of chatbot empathy provides no conclusion regarding user reactions to vaccine promotion chatbots, including CPI and chatbot persuasiveness (ie, the degree to which user vaccination intent changes after the conversation with the chatbot). Furthermore, although previous studies suggest that the way of empathizing (ie, too “humanlike” expression) may lead to a negative response, it is unclear how the response to vaccination chatbot empathy depends on the context related to the conversation, such as the conversation quality and chatbot expertise cues (eg, the chatbot’s endorser credibility). Conversation quality is the actual quality of the information provided by a chatbot in response to specific users' questions, which is considered a key component of chatbot performance [
]. Chatbot expertise cues may increase initial trust toward the chatbot [ ] and, therefore, act as an important factor in the response to its empathy/autonomy support expression. This gap is also relevant for vaccine policy makers and developers of vaccination chatbots as both conversation quality and chatbot expertise cues are directly manageable. To bridge this gap, we aim to investigate how the relationship between the user response to COVID-19 vaccination chatbots and the chatbots’ expressed empathy/autonomy support depends on conversation quality and the chatbots’ endorser credibility.
We propose that the effect of vaccination chatbot empathy/autonomy support expression on user response may be moderated by the conversation quality. In general, the quality of chatbot performance consists of how well a chatbot matches its predefined responses with the user’s request and how well the set of the chatbot’s predefined responses covers the requests of its users [
]. Accordingly, chatbot performance at the conversation (or interaction) level may be indicated by the relevance of the provided answers to user requests, the coverage of user requests by the predefined chatbot responses, and the (low) frequency of failures (ie, instances when a chatbot cannot provide a response to a user request). Anthropomorphizing conversation agents and service agents may raise user expectations related to the quality of interaction [ , ] and thus make users more sensitive to interaction problems, as suggested by Pardo et al [ ]. Here, one may refer to incongruity theory [ , ], positing that unresolved incongruity between stimuli (here, the expressed empathy/autonomy support and the actual performance of the chatbot) may lead to negative reactions. In other words, users may react less positively (or even more negatively) to a chatbot's declarations of its empathy/autonomy support when they are incongruent with the actual quality of the output of the chatbot. Therefore, it seems plausible that the effect of vaccination chatbot empathy/autonomy support expression on the user response is less positive (or more negative) when the actual answers provided by the chatbot are of lower quality (eg, when the chatbot responds that it does not understand the user and it cannot answer their question). Considering this, our research aims to test the following:
Hypothesis (H)1: The relationship between the user response to COVID-19 vaccination chatbots (ie, CPI and chatbot persuasiveness in terms of COVID-19 vaccination) and their expressed empathy/autonomy support is less positive (or more negative) when the conversation quality is lower.
Chatbot endorser credibility may be an additional moderator of the chatbot empathy/autonomy support expression effect. Namely, a more credible endorser (ie, expert vs layperson) may act as an expertise cue and therefore produce a higher level of initial trust toward the chatbot [
]. A more positive response to the chatbot empathy/autonomy support expression is then plausible, as it is more consistent with that initial attitude. Formally:
H2: The relationship between the user response to COVID-19 vaccination chatbots (ie, CPI and chatbot persuasiveness in terms of COVID-19 vaccination) and their expressed empathy/autonomy support is more positive (or less negative) when the chatbot is endorsed by an expert (vs a layperson).
Our experimental study aims to test user reactions to COVID-19 vaccination chatbots with different levels of expressed empathy/autonomy support, examining the proposed moderating role of conversation quality and chatbot endorser credibility.
Methods
Sample
In total, 211 Dutch-speaking adults living in Belgium (aged 18-60 years) and not vaccinated (yet) participated in an online experiment. The participants were panel respondents of the market research company Bilendi, Belgium. Their panel was recruited through various sources, such as the Bilendi website, banner media websites, member-get-member, third-party databases, and offline recruitment. To join the panel, respondents had to complete a sociodemographic questionnaire, approve a double opt-in where they confirmed they wanted to participate in Bilendi’s surveys, and accept their privacy policy. The participants were approached for the survey through personal email invitations and reminders. To thank the respondents for their contribution and encourage them to complete the online experiment, Bilendi granted points that the respondents could save up and swap for a prize later. The data collection was conducted in 2021, between May 28 and June 3, while daily new cases in Belgium were around 1400 [
, ], and around 54% of the adult population in Flanders and Brussels (where Dutch is an official language) were vaccinated against COVID-19 with at least 1 dose [ ]. After the preintervention survey, the participants were redirected to the chatbot conversation (ie, the intervention) and then redirected to the postintervention survey. Due to logging problems, the responses of 15 (7.1%) participants could not be linked back to their IDs. Specifically, in the postintervention survey, all the participants were asked to manually fill out a code generated after their conversation with the chatbot. In 15 (7.1%) cases, a blank, incorrect, or nonexistent code was entered. Therefore, those responses could not be linked to the participants’ conversations. In similar future experiments, the logging should be automated to prevent this kind of data loss. The final sample thus consisted of 196 (92.9%) participants (mean age 39.4, SD 9.4 years; n=112, 57.1%, females; see the sample characteristics in ). At the time of surveying, 63.8% (n=125) of our participants had not yet received an invitation to be vaccinated against COVID-19, 33.7% (n=66) had been invited and were waiting for the first shot, and 2.6% (n=5) had been invited and refused. The vast majority of participants (n=186, 94.9%) declared they had not contracted COVID-19.
Characteristics | Total sample, n (%) | Present empathy/autonomy support expression, n (%) | Absent empathy/autonomy support expression, n (%)
Gender
Women | 112 (57.1) | 64 (63.4) | 48 (50.5)
Men | 84 (42.9) | 37 (36.6) | 47 (49.5)
Age (years)
<26 | 14 (7.1) | 8 (7.9) | 6 (6.3)
26-35 | 60 (30.6) | 37 (36.6) | 23 (24.2)
36-45 | 64 (32.7) | 32 (31.7) | 32 (33.7)
46-55 | 55 (28.1) | 24 (23.8) | 31 (32.6)
>55 | 3 (1.5) | N/Aa | 3 (3.2)
Education
Primary | 1 (0.5) | 1 (1.0) | N/A
Secondary | 40 (20.4) | 13 (12.9) | 27 (28.4)
Bachelor’s degree | 81 (41.3) | 44 (43.6) | 37 (38.9)
Master’s degree | 71 (36.2) | 41 (40.6) | 30 (31.6)
Doctoral degree | 3 (1.5) | 2 (2.0) | 1 (1.1)
aN/A: not applicable.
Stimuli
As experimental stimuli, we used VaccinChat, a chatbot providing COVID-19 vaccination information. VaccinChat was developed by the CLiPS (Computational Linguistics, Psycholinguistics and Sociolinguistics) research center associated with the Department of Linguistics of the Faculty of Arts of the University of Antwerp, Belgium. The chatbot is a RASA [
] artificial intelligence (AI)–equipped frequently asked questions (FAQ) chatbot, trained to classify user questions provided in open text fields into predefined answer intents, and it has been publicly available as a development version. It does not follow a specific conversation flow but answers questions one by one. The final version is trained on 12,883 questions referring to 181 answers [ ].
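Purely for illustration (this is not the authors' RASA implementation), the core mechanism of such an FAQ chatbot, that is, classifying a free-text user question into one of the predefined answer intents and falling back when confidence is low, can be sketched with a simple text classification baseline; all intents, training questions, and thresholds below are hypothetical.

```python
# Minimal sketch of FAQ-style intent classification with a confidence-based fallback
# (hypothetical intents and data; not the VaccinChat/RASA pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training question is labeled with the predefined answer intent it maps to;
# a production system such as VaccinChat is trained on thousands of such questions.
train_questions = [
    "Are the vaccines safe?", "Is the vaccine dangerous?",
    "What side effects can I expect?", "Will I feel sick after the shot?",
    "Where can I get vaccinated?", "Which centers give the vaccine?",
    "When is it my turn to get a shot?", "When will I be invited?",
]
train_intents = ["safety", "safety", "side_effects", "side_effects",
                 "location", "location", "schedule", "schedule"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(train_questions, train_intents)

def answer_intent(question, threshold=0.3):
    """Return the predicted answer intent, or a fallback when confidence is too low."""
    probabilities = classifier.predict_proba([question])[0]
    best = probabilities.argmax()
    if probabilities[best] < threshold:
        return "fallback"  # the chatbot replies that it does not understand
    return classifier.classes_[best]

print(answer_intent("Is it safe to get the vaccine?"))
```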
Procedure
The experiment had a 2 (chatbot empathy/autonomy support expression: present vs absent) × 2 (endorser credibility: expert vs layperson) between-subject design. The participants were randomly assigned to the experimental conditions, while striving for an equal gender distribution. The final sample sizes were Npresent=101 and Nabsent=95 across the empathy/autonomy support expression condition and Nexpert=111 and Nlayperson=85 across the endorser credibility condition (experimental cell sizes: Nempathy/autonomy present, expert=57, Nempathy/autonomy present, layperson=44, Nempathy/autonomy absent, expert=54, Nempathy/autonomy absent, layperson=41). No significant differences in initial vaccination intention occurred across the experimental conditions (empathy/autonomy support expression condition: t194=0.336, P=.74; endorser credibility condition: t194=0.719, P=.47). In addition, no significant differences occurred in demographics for the empathy/autonomy support expression condition (gender: χ21=3.296, P=.07; age range: χ24=7.266, P=.12; education: χ24=8.367, P=.08). For the endorser credibility condition, there were no significant differences in age and education (age range: χ24=1.628, P=.80; education: χ24=1.719, P=.79), but the percentage of women was higher in the expert condition (n=71 vs 41, 64.0% vs 48.0%, χ21=4.863, P=.03). To ensure the sample characteristics would not disturb the tested effects, we included gender, age range, education, and initial vaccination intention as covariates in all analyses.
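As an aside, randomization balance checks of this type (t tests for initial vaccination intention and chi-square tests for the demographics) can be reproduced with standard statistical tooling; the sketch below uses hypothetical column names and scipy rather than the software actually used for the analyses.

```python
# Illustrative randomization balance checks (hypothetical data file and column names).
import pandas as pd
from scipy.stats import ttest_ind, chi2_contingency

df = pd.read_csv("experiment_data.csv")  # one row per participant (hypothetical file)

# t test: initial vaccination intention across the empathy/autonomy support conditions
present = df.loc[df["empathy_condition"] == "present", "initial_intention"]
absent = df.loc[df["empathy_condition"] == "absent", "initial_intention"]
t_stat, p_value = ttest_ind(present, absent)
print(f"Initial intention: t={t_stat:.3f}, P={p_value:.2f}")

# Chi-square test: gender distribution across the empathy/autonomy support conditions
crosstab = pd.crosstab(df["gender"], df["empathy_condition"])
chi2, p_value, dof, _ = chi2_contingency(crosstab, correction=False)
print(f"Gender: chi-square({dof})={chi2:.3f}, P={p_value:.2f}")
```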
The participants first received an invitation email with the survey link, which randomly redirected them to 1 of the 4 experimental conditions. After the participant provided informed consent, the preintervention survey started with demographics and questions on COVID-19 vaccination status, initial COVID-19 vaccination intent, and personal experience with COVID-19. Next, the participants were informed about the chatbot, received the related instructions and time recommendations, and were redirected to the chatbot and asked to start the conversation. We encouraged the participants to submit their questions and concerns about COVID-19 vaccinations. The instruction was as follows:
In a moment, you will be asked to start a conversation with the chatbot. A chatbot is a kind of computer program. There is no real person behind the chat. The conversation will open in a new tab. We are interested in your experience with the chatbot. The idea is that you will submit any questions or concerns you may have about the COVID-19 vaccines and vaccination to the chatbot. These questions can be very practical but can also be about genuine concerns or serious vaccine doubts. If the chatbot does not understand your question, please try to word your question differently. We ask you to have a conversation with the chatbot for a minimum of 3 minutes and a maximum of 5 minutes.
Before starting a conversation with the chatbot, the participants read a brief endorsement claiming, “The chatbot gives a clear answer to the questions and concerns about the COVID-19 vaccines,” and that the chatbot is “particularly on point.” In the expert endorser condition, this endorsement was provided by “the well-known vaccinologist, Prof. Dr. Pierre Van Damme.” In the layperson endorsement condition, the endorsers were described as “a group of Flemings with questions about vaccines, including vaccine doubters.”
The chatbot worked differently depending on the empathy/autonomy support expression condition. Namely, in the present empathy/autonomy support expression condition, the chatbot displayed additional phrases to express its empathy and autonomy support. Specifically, the chatbot showed understanding of the user’s concerns (eg, “I understand your concern. This is ultimately about your own body.”), showed appreciation for the user’s knowledge (eg, “Good that you are aware of this.”), supported the user’s autonomy regarding the decision to get vaccinated (eg, “Whether or not you vaccinate your children, when possible, remains, of course, a personal choice.”), showed interest in the user’s situation (eg, “I hope with you that we can soon pick up life again in all its glory.”), and comforted the user by referring to the social context (eg, “There are many people who are currently facing this question.”). See all phrases appearing in the predefined set of the chatbot's answers in
.
Way of expressing empathy/autonomy support | Phrases in the predefined set of the chatbot's answers (listed in alphabetical order)
Showing an understanding of the user’s concerns |
Showing appreciation for the user |
Expressing autonomy support regarding the decision to get vaccinated |
Showing interest in the user’s situation |
Comforting the user by referring to the social context |
These phrases ( ) were removed in the absent empathy/autonomy support expression condition.
The empathy/autonomy support phrases were attached to predefined chatbot answers (called adapted answers). The occurrence of those answers in a conversation depended on the questions asked by each participant. In the actual conversations with the participants in our study, the mean share of the adapted answers (ie, the ones belonging to the predefined set of the chatbot's answers differing between the present and absent empathy/autonomy support expression conditions) was 63% (SD 0.2349). To ensure that the participant-level share of adapted answers for empathy/autonomy support expression would not disturb the tested effects, we included it as an additional covariate in all analyses.
The most frequently asked questions were similar to those reported by Buhmann et al [
], such as out-of-domain questions, general COVID-19 questions that do not consider vaccination (eg, regulations), questions about side effects of the vaccines, and questions about who will get their vaccine when and where. User questions dealing with trust issues mainly focused on vaccines in general, science, and governmental institutes and policies. The average number of user questions was similar across the empathy/autonomy support condition (meanpresent 8.181, SD 7.364; meanabsent 8.787, SD 9.598; F1,187=0.232, P=.63) and the endorser credibility condition (meanexpert 8.022, SD 9.688; meanlayperson 8.945, SD 6.674; F1,187=0.548, P=.46). In addition, the average length of the user questions was similar across the conditions (empathy/autonomy support condition: meanpresent 8.327, SD 4.403; meanabsent 7.669, SD 3.043; F1,187=1.401, P=.24; endorser credibility condition: meanexpert 7.633, SD 3.021; meanlayperson 8.363, SD 4.637; F1,187=1.752, P=.19).
After their conversation with the chatbot, the participants had to use the conversation code they received from the chatbot at the beginning of the conversation to start the postintervention survey. Based on that code, we were able to link their answers to their IDs. Next, we measured the participants’ perceived autonomy during the conversation, CPI, and the intention to get vaccinated against COVID-19. The participants were debriefed that the chatbot gave different responses to different participants but that the information provided by the chatbot did not contain any factual falsehoods. Finally, the participants were asked for feedback, thanked, and received monetary compensation.
Measurements
Vaccination Intention Shift
The initial COVID-19 vaccination intention was measured right after the question about the COVID-19 vaccination status with the item “If you had to make a choice now, will you get vaccinated or not, as soon as the opportunity presents itself?” The after-conversation intention was measured with the item “After talking to the chatbot, will you get vaccinated against COVID-19 as soon as you get an invitation?” Both items used a 6-point response format ranging from 1 (very unlikely/certainly not) to 6 (very likely/certainly so). The vaccination intention shift (VIS) was determined by subtracting the initial COVID-19 intention from the after-conversation intention and coded from –5 (indicating the highest-possible decrease) to 5 (indicating the highest-possible increase).
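For illustration only (with made-up scores, not our data), this coding amounts to a simple difference between the 2 six-point items:

```python
# Illustrative computation of the vaccination intention shift (VIS); scores are made up.
import pandas as pd

responses = pd.DataFrame({
    "intention_pre": [6, 4, 2],   # 1 = very unlikely ... 6 = very likely
    "intention_post": [6, 5, 1],
})
# Post-conversation intention minus initial intention ranges from -5 (largest decrease)
# to 5 (largest increase).
responses["VIS"] = responses["intention_post"] - responses["intention_pre"]
print(responses)
```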
Perceived User Autonomy
Here, 10 items adapted from Smit et al [
] measured the perceived support for user autonomy (perceived user autonomy [PUA]) in a virtual care setting (here, during the conversation with the chatbot; α=.905). The statements were related to giving freedom of choice (eg, “I feel that the conversation with the chatbot has given me choices and options.”) and empathy (eg, “I felt understood during the conversation with the chatbot.”); see details in . The items used a 5-point response format ranging from 1 (do not agree at all) to 5 (strongly agree).
Scale | Item | Reliability, α | Reference
PUAa |  | .905 | Adapted from Smit et al [ ]
CPIb |  | .921 | Similar to Zarouali et al [ ]
aPUA: perceived user autonomy.
bCPI: chatbot patronage intention.
Chatbot Patronage Intention
Here, 3 items measured the patronage intention toward the chatbot (CPI; α=.921). Similar items have been frequently used in previous research (eg, [
]). The statements were related to the willingness and likelihood of reusing and recommending the chatbot (see details in ). The items used the same response format as in the PUA measurement.
Chatbot Conversation Quality
In addition to the respondents' perceptions, we analyzed the quality of actual answers provided by the chatbot for each respondent using the following metrics based on conversation logs:
- Conversation accuracy (CA): The percentage of correct answers in the conversation (an answer's correctness was determined manually by assessing the relevance of the answer provided by the chatbot to the user's question). The mean CA was 53.0% (SD 0.24295); 100% CA was reached for 9.2% (n=18) of the participants, and it exceeded 50% for 45.4% (n=89) of the participants.
- Conversation-in-database (CD): The percentage of answers in the conversation for which the user's question was exactly the same as a question already included in the chatbot training data. The mean CD was 5.9% (SD 0.11294); for 70.4% (n=138) of the participants, the CD was 0% (no exact matches), and it did not exceed 50% for the other participants.
- CF: The percentage of answers in the conversation that took the form of “I don't understand,” followed by a request to rephrase the question. The mean CF was 4.0% (SD 0.08979); the CF was 0% for 77.6% (n=152) of the participants (no fallbacks), and it did not exceed 50% for 99.5% (n=195) of the participants.
- Conversation wrong-answer (WN): The percentage of user questions in the conversation for which the chatbot did not have a suitable answer and provided an answer that corresponded to another question. The mean WN was 12.5% (SD 0.17254); the WN was 0% for 52.6% (n=103) of the participants (no wrong answers), and it did not exceed 50% for 97.4% (n=191) of the participants.
Most errors could be attributed to questions that were not yet included in the chatbot at the moment of the intervention. For example, a number of participants asked questions about the possibility of a third vaccination round because this had been reported in the media. The chatbot, however, had not been updated to deal with this information at that time.
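To make these definitions concrete, the sketch below shows how the 4 conversation-level metrics could be computed from per-answer annotations of a conversation log; the data structure and example turns are hypothetical and do not reflect our actual annotation tooling.

```python
# Illustrative computation of the conversation quality metrics (CA, CD, CF, WN)
# from manually annotated conversation logs (hypothetical data structure).
from dataclasses import dataclass

@dataclass
class AnnotatedAnswer:
    question: str
    correct: bool       # answer judged relevant to the question (CA)
    in_database: bool   # user question literally present in the training data (CD)
    fallback: bool      # chatbot replied "I don't understand" (CF)
    wrong_answer: bool  # chatbot answered a different question (WN)

def conversation_metrics(answers):
    """Return each metric as a percentage of the answers in one conversation."""
    n = len(answers)
    def percentage(flags):
        return 100.0 * sum(flags) / n
    return {
        "CA": percentage(a.correct for a in answers),
        "CD": percentage(a.in_database for a in answers),
        "CF": percentage(a.fallback for a in answers),
        "WN": percentage(a.wrong_answer for a in answers),
    }

log = [
    AnnotatedAnswer("Is the vaccine safe?", True, True, False, False),
    AnnotatedAnswer("Can I get a third shot?", False, False, True, False),
    AnnotatedAnswer("Does the vaccine change my DNA?", False, False, False, True),
    AnnotatedAnswer("Where do I register?", True, False, False, False),
]
print(conversation_metrics(log))  # {'CA': 50.0, 'CD': 25.0, 'CF': 25.0, 'WN': 25.0}
```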
Ethical Considerations
The Ethics Committee for the Social Sciences and Humanities of the University of Antwerp, Belgium (code of ethical approval: SHW_21_57) approved this research. Prior to the study, all respondents signed the informed consent form. All data were anonymously collected; we only kept track of an assigned response ID. All human experimentation guidelines were respected. Our participants received monetary compensation for their contribution to our online experiment.
Results
Effects of Different Conditions
First, we ran 2 ANCOVAs to check the main effects of chatbot empathy/autonomy support expression on chatbot evaluation separately for PUA and CPI as dependent variables. The empathy/autonomy support expression condition (present vs absent) and the endorser credibility condition (expert vs layperson) were factors, and the share of adapted answers for empathy/autonomy support expression, gender, age, education, and initial vaccination intention were covariates. No significant manipulation effects occurred under the following conditions:
- PUA under the empathy/autonomy support expression condition: meanpresent 2.935, SD 0.747; meanabsent 3.045, SD 0.864; F1,187=0.906, P=.34
- PUA under the endorser credibility condition: meanexpert 2.983, SD 0.858; meanlayperson 2.996, SD 0.736; F1,187=0.012, P=.91
- CPI under the empathy/autonomy support condition: meanpresent 3.150, SD 1.152; meanabsent 3.450, SD 1.289; F1,187=3.023, P=.08
- CPI under the endorser credibility condition: meanexpert 3.293, SD 1.233; meanlayperson 3.306, SD 1.223; F1,187=0.006, P=.94
In addition, the 2 manipulations had no interaction effects (for PUA: F1,187=0.735, P=.39; for CPI: F1,187=0.327, P=.57), providing no support for H2.
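For readers who wish to reproduce this type of analysis, the factorial ANCOVA with covariates can be specified as an ordinary regression model; the sketch below uses Python/statsmodels and hypothetical column names rather than the software we actually used.

```python
# Illustrative 2 x 2 ANCOVA on PUA with covariates (hypothetical column names).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_data.csv")  # one row per participant (hypothetical file)

model = smf.ols(
    "PUA ~ C(empathy_condition) * C(endorser_condition)"
    " + share_adapted_answers + C(gender) + age_range + education + initial_intention",
    data=df,
).fit()
# Type III sums of squares approximate the factorial ANCOVA reported above;
# the same formula with CPI as the outcome gives the second analysis.
print(sm.stats.anova_lm(model, typ=3))
```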
Next, we checked the role of actual answers provided by the chatbot in the relationship between the empathy/autonomy support expression and chatbot evaluation. Specifically, we checked the moderation effects of conversational quality variables. We ran a series of moderation analyses (PROCESS macro [
], model 1) with the empathy/autonomy support expression condition (dummy-coded as 1=present, 0=absent) as an independent variable, PUA as a dependent variable, and a conversation quality variable as a moderator. The endorser credibility condition (dummy-coded as 1=expert, 0=layperson), the share of adapted answers for empathy/autonomy support expression, gender (dummy-coded as 1=women, 0=men), age, education, and initial vaccination intent served as covariates. We included the endorser credibility condition as a covariate to ensure that differences in its distribution across the empathy/autonomy support expression conditions would not disturb the tested effects. No moderation effects occurred with the following moderators: CA (B=0.396, SE 0.434, t186=0.911, P=.36), CD (B=–0.119, SE 1.154, t186=0.103, P=.92), and WN (B=–0.362, SE 0.679, t186=0.533, P=.59). However, in the moderation model with CF as a moderator ( ; R2=0.145, F9,186=3.507, P<.001, variance inflation factors [VIFs]<1.2), the interaction effect of CF and empathy/autonomy support expression was negative (B=–3.358, SE 1.235, t186=2.718, P=.007), in line with H1. The conditional effect of empathy/autonomy support expression at the CF level of +1SD was negative ( ; B=–0.405, SE 0.158, t186=2.564, P=.011), while being nonsignificant at the mean level (B=–0.103, SE 0.113, t186=0.914, P=.36) and the –1SD level (B=0.031, SE 0.123, t186=0.252, P=.80). The Johnson-Neyman cutoff value of CF was 4.0% ( ). The conditional effects of empathy/autonomy support expression at CF levels below the cutoff were nonsignificant (P>.10), while for levels above the cutoff, the effects were significantly negative (P<.05). These results indicate that empathy/autonomy support expression decreases PUA when CF is higher than 4.0%, while it has no impact on PUA when CF is lower than 4.0%.
We used CF as a moderator in further analysis. Aiming to check the consequences of the CF × empathy/autonomy support expression interaction on behavioral outcomes, we ran 2 first-stage moderated mediation analyses (PROCESS macro [
], model 7, 5000 bootstrap samples). Both analyses used the empathy/autonomy support expression condition (dummy-coded as 1=present, 0=absent) as an independent variable, PUA as a mediator, and CF as a first-stage moderator. The set of covariates was the same as in the previous analysis (VIF<1.2).
In the first analysis, CPI served as a dependent variable (
; R2=0.601, F9,187=35.237, P<.001). The effect of PUA on CPI was positive (B=1.095, SE 0.074, t187=14.719, P<.001), and the moderated mediation index was negative (index=–3.676, BootSE 1.614, 95% CI –6.697 to –0.102). The conditional indirect effect of empathy/autonomy support expression on CPI at the CF level of +1SD was negative (B=–0.443, BootSE 0.202, 95% CI –0.809 to –0.005), while the 95% CIs included 0 at the mean level (B=–0.113, BootSE 0.124, 95% CI –0.346 to 0.137) and the –1SD level (B=0.034, BootSE 0.132, 95% CI –0.224 to 0.305).
In the second analysis, VIS served as a dependent variable (
; R2=0.086, F9,187=2.194, P=.03). The effect of PUA on VIS was positive (B=0.138, SE 0.051, t187=2.704, P=.008), and the moderated mediation index was marginally negative (index=–0.464, BootSE 0.302, 90% CI –0.978 to –0.021). The conditional indirect effect of empathy/autonomy support expression on VIS at the CF level of +1SD was marginally negative (B=–0.056, BootSE 0.036, 90% CI –0.118 to –0.002), while being nonsignificant at the mean level (B=–0.014, BootSE 0.018, 90% CI –0.044 to 0.014) and the –1SD level (B=0.004, BootSE 0.019, 90% CI –0.023 to 0.039).
CF | Effect | SE | t188 | P value | 95% CI
–0.0399 | 0.0310 | 0.1232 | 0.2517 | .80 | –0.2121 to 0.2742 |
–0.0121 | –0.0623 | 0.1138 | –0.5472 | .59 | –0.2867 to 0.1622 |
0.0157 | –0.1555 | 0.1143 | –1.3613 | .18 | –0.3810 to 0.0699 |
0.0421 | –0.2445 | 0.1239 | –1.9728 | .05 | –0.4889 to 0 |
0.0434 | –0.2488 | 0.1246 | –1.9972 | .047 | –0.4946 to –0.0030 |
0.0712 | –0.3421 | 0.1426 | –2.3985 | .02 | –0.6235 to –0.0607 |
0.099 | –0.4354 | 0.1659 | –2.6245 | .009 | –0.7627 to –0.1081 |
0.1268 | –0.5287 | 0.1925 | –2.7465 | .007 | –0.9085 to –0.1489 |
0.1545 | –0.622 | 0.2212 | –2.8114 | .006 | –1.0584 to –0.1855 |
0.1823 | –0.7153 | 0.2514 | –2.8455 | .005 | –1.2112 to –0.2194 |
0.2101 | –0.8086 | 0.2825 | –2.8625 | .005 | –1.3658 to –0.2513 |
0.2379 | –0.9018 | 0.3142 | –2.8700 | .005 | –1.5217 to –0.2819 |
0.2657 | –0.9951 | 0.3465 | –2.8721 | .005 | –1.6787 to –0.3116 |
0.2934 | –1.0884 | 0.3791 | –2.8711 | .005 | –1.8363 to –0.3405 |
0.3212 | –1.1817 | 0.4120 | –2.8683 | .005 | –1.9945 to –0.3689 |
0.349 | –1.2750 | 0.4451 | –2.8645 | .005 | –2.1531 to –0.3969 |
0.3768 | –1.3683 | 0.4784 | –2.8603 | .005 | –2.3120 to –0.4245 |
0.4045 | –1.4616 | 0.5118 | –2.8558 | .005 | –2.4712 to –0.4519 |
0.4323 | –1.5548 | 0.5453 | –2.8513 | .005 | –2.6306 to –0.4791 |
0.4601 | –1.6481 | 0.5789 | –2.8469 | .005 | –2.7902 to –0.5060 |
0.4879 | –1.7414 | 0.6126 | –2.8425 | .005 | –2.9500 to –0.5328 |
0.5157 | –1.8347 | 0.6464 | –2.8384 | .005 | –3.1099 to –0.5595 |
aCF: conversation fallback.
bPUA: perceived user autonomy.
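Although we used the PROCESS macro, the moderation model underlying these conditional effects is an ordinary regression with an interaction term, and the conditional (simple) effects at different CF values follow from the coefficient covariance matrix. The sketch below is an illustration that assumes hypothetical column names and the 0/1 dummy coding described above; it is not the PROCESS implementation itself.

```python
# Illustrative moderation analysis (analogue of PROCESS model 1) and conditional effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_data.csv")  # hypothetical file; empathy coded 1=present, 0=absent

model = smf.ols(
    "PUA ~ empathy * CF + endorser + share_adapted_answers"
    " + gender + age_range + education + initial_intention",
    data=df,
).fit()
print("Interaction:", model.params["empathy:CF"], model.pvalues["empathy:CF"])

# Conditional effect of the empathy/autonomy support manipulation at CF = mean -/+ 1 SD.
cov = model.cov_params()
for cf_value in (df["CF"].mean() - df["CF"].std(),
                 df["CF"].mean(),
                 df["CF"].mean() + df["CF"].std()):
    effect = model.params["empathy"] + model.params["empathy:CF"] * cf_value
    variance = (cov.loc["empathy", "empathy"]
                + cf_value ** 2 * cov.loc["empathy:CF", "empathy:CF"]
                + 2 * cf_value * cov.loc["empathy", "empathy:CF"])
    print(f"CF={cf_value:.3f}: conditional effect={effect:.3f}, SE={np.sqrt(variance):.3f}")
```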
Discussion
Principal Findings
The aim of our study was to improve the understanding of the role of the conversation-related context in the effects of expressing empathy/autonomy support using COVID-19 vaccination chatbots. Specifically, we checked the moderating role of 2 conversation context factors (ie, conversation quality and chatbot expertise cues). The results of our experiment suggest that chatbot empathy/autonomy support expression may harm user evaluation and behavioral intentions related to a chatbot and diminish its usefulness in promoting COVID-19 vaccinations when the conversation quality is low (as indicated by higher levels of CF). However, we did not find any effects of chatbot expertise cues, which were manipulated through chatbot endorser credibility.
We demonstrated that the effect of expressing empathy/autonomy support using the COVID-19 vaccination chatbot on PUA was negative for a high level of CF, while being nonsignificant for its mean and low levels. Moreover, we showed that this moderation effect has consequences on CPI and chatbot persuasiveness. The indirect effects of empathy/autonomy support expression on CPI and VIS through PUA were negative for a high level of CF, while being nonsignificant for its mean and low levels. These results add to the growing literature on the user response to chatbots (eg, [
, , , , , , ]). Our results are in line with numerous previous studies indicating mixed or negative effects of chatbots' empathy [ , ], robots' capability to infer human mental states [ ], robots' and conversational agents' anthropomorphic features [ , ], and chatbots' displayed emotions [ ] on users' responses to chatbots. Importantly, unlike these previous studies, our study was based on the measurement of the actual quality of conversation with a chatbot user instead of presenting cues or scenarios to study participants.
CF (ie, the chatbot replying “I do not understand”) may represent the deficiency of actual chatbot empathy and, in turn, its limited support for user autonomy, as the chatbot fails to resolve the user's request. As such, higher levels of CF may indicate to users that the chatbot's actual answers lack empathy and support for user autonomy. Considering this, the negative interaction of chatbot empathy/autonomy support expression and CF that we revealed is in line with the notion that chatbot anthropomorphism (here, in the form of empathy/autonomy support expression) may increase user expectations and sensitivity to the quality of interaction [
, ]. Namely, a higher CF indicates an issue with empathy and support for user autonomy in actual chatbot answers, and users are more sensitive to this issue when chatbot empathy/autonomy support expression is higher. Although Pelau et al [ ] demonstrated a positive relationship between perceived AI empathy and perceived AI interaction quality, we found a negative interaction effect of manipulated chatbot empathy/autonomy support expression and the actual conversation quality. Put differently, when CF is higher, a higher empathy/autonomy support expression increases the incongruity in empathy/autonomy support between the chatbot's actual answers and its declarations (expressions). This incongruity may produce negative user reactions [ , ]. This way, we shed new light on the existing chatbot research focusing on incongruities in chatbot perception [ ]. Particularly, our results highlight that the positive effects of chatbot empathy, suggested by the CASA approach [ , ], diminish not only when a chatbot is too humanlike, as proposed by UVM theory [ ], but also in the presence of incongruity between the declared empathy and the chatbot’s actual outputs.
In terms of Sundar's [
] framework of human-AI interactions, the incongruity in empathy/autonomy support between the actual answers and the chatbot’s declarations may represent the discrepancy between the cue route (based on visible attributes of the AI system, such as empathy expression) and the action route (based on the effects of human-AI collaboration, such as the actual quality of conversation). This way, our results suggest that those 2 routes may influence the user response to an AI-based system not only separately but also via their interplay. We applied these chatbot communication considerations to vaccination intention, showing that this incongruity may harm the persuasiveness of vaccination chatbots. This way, we have contributed to the literature on vaccination promotion through chatbots (eg, [ , , ]).
The absence of interaction effects of expressing empathy/autonomy support and the chatbot’s endorser credibility might be caused by our experimental setting, which involved extensive real interactions with the chatbot. In this case, the user response might have been influenced more by the actual conversation quality, and expertise cues were therefore discounted. Interestingly, endorser credibility also had no significant main effect on the user response.
Limitations
Certain limitations of our study call for further research. First, our study used the Dutch-speaking population of Belgian adults, who have been demonstrated to have relatively positive attitudes toward vaccinations (eg, 84.0% agreement with “Vaccines are effective” [
]). In our sample, the majority of the respondents (77.8%) declared to be “very likely” to get vaccinated against COVID-19 as the initial intention. This might produce a ceiling effect and thus limit the observed effects of our chatbot on VIS. Therefore, future studies should cover populations with more negative vaccine attitudes (eg, Latvian adults, with 70.9% agreement with “Vaccines are effective” [ ]).
Second, we were limited to a specific stage of the COVID-19 pandemic (ie, about 1 year after introducing the vaccines) and a specific preventable illness. Further research can extend the investigation of chatbot empathy/autonomy support effectiveness to later pandemic stages (eg, when COVID-19 became endemic) and other preventable illnesses (eg, influenza).
Third, we conceptualized and operationalized chatbot empathy and autonomy support together. Therefore, future studies examining the effects of chatbot empathy/autonomy support expression should distinguish between those 2 constructs in experimental manipulations and measurements. Next, further studies may investigate in more depth the mechanisms underlying the negative effects of the interaction of empathy/autonomy support expression and conversation quality demonstrated in this paper. Specifically, it may be checked whether those effects result from chatbot empathy being more likely to evoke a perception of eeriness (as considered in UVM theory [ ]) when the quality of the information provided by the chatbot is lower.
Fourth, we measured conversation quality instead of manipulating it. Although this approach enhances realism, as the quality of answers provided by chatbots may heavily depend on the interactions with their users, manipulating this variable would provide better causal evidence of its moderating role. Therefore, future experiments can compare conditions differing in the percentage of questions a chatbot fails to answer.
Lastly, our null result regarding the moderating role of expertise cues in the effect of expressing empathy/autonomy support may result from our experimental setting, in which the participants were involved in extensive interaction with the chatbot. Perhaps, in the case of less engaging interactions, expertise cues play a larger role. This possibility should be examined in further research.
Conclusion
Our study underlined the pitfalls of making vaccination chatbots empathetic in specific conversation-related contexts, namely when the conversation quality is too low. The results may guide vaccine policy makers and chatbot developers to be careful with communicating chatbot empathy/autonomy support unless the chatbot's capability to address user questions is firmly proven. Therefore, introducing a chatbot's empathy/autonomy support expression should be preceded by assessing the quality of the answers the chatbot provides. For example, chatbots in an early stage of development with a high level of CF should refrain from expressing empathy and support for user autonomy. However, it is difficult to keep up with the information flow manually when information changes quickly, as in a pandemic. Consequently, there may always be user questions for which the chatbot does not yet know appropriate answers, resulting in fallbacks. Automatic text-mining tools or a model such as ChatGPT could be used to scrape new information from the web more easily. In addition, an instruction-tuned model such as ChatGPT could provide a multitude of exemplary user questions. These could be used as extra training data, speeding up and improving the chatbot’s continuous growth, thus avoiding or reducing the number of fallback answers and, eventually, allowing the effective use of chatbot empathy/autonomy support expression.
Acknowledgments
This research received funding from (1) the Flemish government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” program and (2) the King Baudouin Foundation of Belgium.
Data Availability
The data set generated and analyzed during this study is available in the Open Science Framework, the repository of the Center for Open Science [
].
Authors' Contributions
KP was the project's principal investigator. KP supervised the study design, data collection, analysis, and manuscript development. KP, TC, and JB were responsible for the study design and data collection. JB and WD developed the technical part of the experimental tool. ADW, GH, and PVD developed the content part of the experimental tool. WT (corresponding author) worked on data analysis and manuscript writing. TC and JB contributed to data processing, analysis, and writing of the Methods section of the manuscript. TC, JB, ADW, GH, PVD, WD, and KP reviewed the manuscript.
Conflicts of Interest
None declared.
References
- Altay S, Hacquin A, Chevallier C, Mercier H. Information delivered by a chatbot has a positive impact on COVID-19 vaccines attitudes and intentions. J Exp Psychol: Applied 2023 Mar 28;29(1):52-62. [CrossRef] [Medline]
- Amiri P, Karahanna E. Chatbot use cases in the Covid-19 public health response. J Am Med Inform Assoc 2022 Apr 13;29(5):1000-1010 [FREE Full text] [CrossRef] [Medline]
- Wang H, Zhang Q, Ip M, Fai Lau JT. Social media–based conversational agents for health management and interventions. Computer 2018 Aug;51(8):26-33. [CrossRef]
- COVID-19 vaccine tracker. European Centre for Disease Prevention and Control. URL: https://vaccinetracker.ecdc.europa.eu/public/extensions/COVID-19/vaccine-tracker.html#uptake-tab [accessed 2023-02-01]
- Pertwee E, Simas C, Larson HJ. An epidemic of uncertainty: rumors, conspiracy theories and vaccine hesitancy. Nat Med 2022 Mar 10;28(3):456-459. [CrossRef] [Medline]
- Liu B, Sundar SS. Should machines express sympathy and empathy? Experiments with a health advice chatbot. Cyberpsychol Behav Soc Netw 2018 Oct;21(10):625-636. [CrossRef] [Medline]
- Miller CH, Lane LT, Deatrick LM, Young AM, Potts KA. Psychological reactance and promotional health messages: the effects of controlling language, lexical concreteness, and the restoration of freedom. Human Comm Res 2007 Apr;33(2):219-240 [FREE Full text] [CrossRef]
- Quick BL, Shen L, Dillard JP. Reactance theory and persuasion. In: Dillard JP, Shen L, editors. The Sage Handbook of Persuasion. Thousand Oaks, CA: Sage; 2013.
- de Gennaro M, Krumhuber EG, Lucas G. Effectiveness of an empathic chatbot in combating adverse effects of social exclusion on mood. Front Psychol 2019;10:3061 [FREE Full text] [CrossRef] [Medline]
- Söderlund M. Service robots with (perceived) theory of mind: an examination of humans’ reactions. J Retail Consum Serv 2022 Jul;67:102999. [CrossRef]
- Stein J, Ohler P. Venturing into the uncanny valley of mind—the influence of mind attribution on the acceptance of human-like characters in a virtual reality setting. Cognition 2017 Mar;160:43-50. [CrossRef] [Medline]
- Ling EC, Tussyadiah I, Tuomi A, Stienmetz J, Ioannou A. Factors influencing users' adoption and use of conversational agents: a systematic review. Psychol Mark 2021 Apr 08;38(7):1031-1051. [CrossRef]
- Yang Y, Liu Y, Lv X, Ai J, Li Y. Anthropomorphism and customers’ willingness to use artificial intelligence service agents. J Hosp Mark Manag 2021 Jul 02;31(1):1-23. [CrossRef]
- Fazzinga B, Galassi A, Torroni P. An argumentative dialogue system for COVID-19 vaccine information. In: Baroni P, Benzmuller C, Wang YN, editors. Logic and Argumentation. CLAR 2021. Lecture Notes in Computer Science, 13040. Cham: Springer; 2021.
- Woo WL, Gao B, Al-Nima RRO, Ling WK. Development of conversational artificial intelligence for pandemic healthcare query support. IJAIML 2020;1(1):54-79.
- Liew TW, Tan SM, Tee J, Goh GGG. The effects of designing conversational commerce chatbots with expertise cues. 2021 Presented at: 14th International Conference on Human System Interaction (HSI); 2021; Gdańsk, Poland p. 1-6. [CrossRef]
- Mohr DC, Schueller SM, Montague E, Burns MN, Rashidi P. The behavioral intervention technology model: an integrated conceptual and technological framework for eHealth and mHealth interventions. J Med Internet Res 2014 Jun 05;16(6):e146 [FREE Full text] [CrossRef] [Medline]
- Betsch C, Schmid P, Heinemeier D, Korn L, Holtmann C, Böhm R. Beyond confidence: development of a measure assessing the 5C psychological antecedents of vaccination. PLoS One 2018 Dec 7;13(12):e0208601 [FREE Full text] [CrossRef] [Medline]
- Betsch C, Bach Habersaat K, Deshevoi S, Heinemeier D, Briko N, Kostenko N, et al. Sample study protocol for adapting and translating the 5C scale to assess the psychological antecedents of vaccination. BMJ Open 2020 Mar 10;10(3):e034869 [FREE Full text] [CrossRef] [Medline]
- Dorman C, Perera A, Condon C, Chau C, Qian J, Kalk K, et al. Factors associated with willingness to be vaccinated against COVID-19 in a large convenience sample. J Community Health 2021 Oct 09;46(5):1013-1019 [FREE Full text] [CrossRef] [Medline]
- Hossain MB, Alam MZ, Islam MS, Sultan S, Faysal MM, Rima S, et al. Health belief model, theory of planned behavior, or psychological antecedents: what predicts COVID-19 vaccine hesitancy better among the Bangladeshi adults. Front Public Health 2021;9:711066 [FREE Full text] [CrossRef] [Medline]
- Pelegrín-Borondo J, Arias-Oliva M, Almahameed AA, Prado Román M. Covid-19 vaccines: a model of acceptance behavior in the healthcare sector. Eur Res Manag Bus Econ 2021 Sep;27(3):100171. [CrossRef]
- Palm R, Bolsen T, Kingsland JT. The effect of frames on COVID-19 vaccine resistance. Front Polit Sci 2021 May 13;3:41. [CrossRef]
- Reiter PL, Pennell ML, Katz ML. Acceptability of a COVID-19 vaccine among adults in the United States: how many people would get vaccinated? Vaccine 2020 Sep 29;38(42):6500-6507 [FREE Full text] [CrossRef] [Medline]
- Wheelock A, Parand A, Rigole B, Thomson A, Miraldo M, Vincent C, et al. Socio-psychological factors driving adult vaccination: a qualitative study. PLoS One 2014 Dec 9;9(12):e113503 [FREE Full text] [CrossRef] [Medline]
- Mohanty S, Leader AE, Gibeau E, Johnson C. Using Facebook to reach adolescents for human papillomavirus (HPV) vaccination. Vaccine 2018 Sep 25;36(40):5955-5961. [CrossRef] [Medline]
- Wallace C, Corben P, Turahui J, Gilmour R. The role of television advertising in increasing pneumococcal vaccination coverage among the elderly, North Coast, New South Wales, 2006. Aust N Z J Public Health 2008 Oct;32(5):467-470 [FREE Full text] [CrossRef] [Medline]
- Graeber D, Schmidt-Petri C, Schröder C. Attitudes on voluntary and mandatory vaccination against COVID-19: evidence from Germany. PLoS One 2021 May 10;16(5):e0248372 [FREE Full text] [CrossRef] [Medline]
- Taber JM, Thompson CA, Sidney PG, O'Brien A, Updegraff J. Promoting vaccination with lottery incentives. PsyArXiv. Preprint posted online July 23, 2021 [FREE Full text]
- Macdonald L, Cairns G, Angus K, de Andrade M. Promotional communications for influenza vaccination: a systematic review. J Health Commun 2013 Dec;18(12):1523-1549. [CrossRef] [Medline]
- Boccalini S, Bonanni P, Chiesi F, Pisa GD, Furlan F, Giammarco B, et al. The experience of VaccinarSinToscana website and the role of new media in promoting vaccination. Vaccines (Basel) 2020 Nov 03;8(4):644 [FREE Full text] [CrossRef] [Medline]
- Steffens MS, Dunn AG, Leask J, Wiley KE. Using social media for vaccination promotion: practices and challenges. Digit Health 2020 Nov 03;6:2055207620970785 [FREE Full text] [CrossRef] [Medline]
- Altay S, Schwartz M, Hacquin A, Allard A, Blancke S, Mercier H. Scaling up interactive argumentation by providing counterarguments with a chatbot. Nat Hum Behav 2022 Apr 14;6(4):579-592. [CrossRef] [Medline]
- Rathnayaka P, Mills N, Burnett D, De Silva D, Alahakoon D, Gray R. A mental health chatbot with cognitive skills for personalised behavioural activation and remote health monitoring. Sensors (Basel) 2022 May 11;22(10):3653 [FREE Full text] [CrossRef] [Medline]
- Zhang J, Oh YJ, Lange P, Yu Z, Fukuoka Y. Artificial intelligence chatbot behavior change model for designing artificial intelligence chatbots to promote physical activity and a healthy diet. J Med Internet Res 2020 Sep 30;22(9):e22845 [FREE Full text] [CrossRef] [Medline]
- Hill J, Randolph Ford W, Farreras IG. Real conversations with artificial intelligence: a comparison between human–human online conversations and human–chatbot conversations. Comput Hum Behav 2015 Aug;49:245-250. [CrossRef]
- Sundar SS, Kim J. Machine heuristic: when we trust computers more than humans with our personal information. 2019 Presented at: 2019 CHI Conference on Human Factors in Computing Systems; May 4-9, 2019; Glasgow, Scotland. [CrossRef]
- Van Baal ST, Le STT, Fatehi F, Verdejo-Garcia A, Hohwy J. Effecting behaviour change using an artificial intelligence chatbot: a pilot randomised controlled study. PsyArXiv. Preprint posted online June 23, 2022. [CrossRef]
- Hong Y, Piao M, Kim J, Lee J. Development and evaluation of a child vaccination chatbot real-time consultation messenger service during the COVID-19 pandemic. Appl Sci 2021 Dec 20;11(24):12142. [CrossRef]
- Miner AS, Laranjo L, Kocaballi AB. Chatbots in the fight against the COVID-19 pandemic. NPJ Digit Med 2020 May 4;3(1):65 [FREE Full text] [CrossRef] [Medline]
- Derksen F, Olde Hartman TC, van Dijk A, Plouvier A, Bensing J, Lagro-Janssen A. Consequences of the presence and absence of empathy during consultations in primary care: a focus group study with patients. Patient Educ Couns 2017 May;100(5):987-993. [CrossRef] [Medline]
- Shen L. The effectiveness of empathy- versus fear-arousing antismoking PSAs. Health Commun 2011 Jul;26(5):404-415. [CrossRef] [Medline]
- Nass C, Moon Y. Machines and mindlessness: social responses to computers. J Soc Issues 2000 Jan;56(1):81-103. [CrossRef]
- Wang LC, Baker J, Wagner JA, Wakefield K. Can a retail web site be social? J Mark 2007;71(3):143-157. [CrossRef]
- Kim SY, Schmitt BH, Thalmann NM. Eliza in the uncanny valley: anthropomorphizing consumer robots increases their perceived warmth but decreases liking. Mark Lett 2019 Mar 14;30(1):1-12. [CrossRef]
- Benke I, Gnewuch U, Maedche A. Understanding the impact of control levels over emotion-aware chatbots. Comput Hum Behav 2022 Apr;129:107122. [CrossRef]
- Morrissey K, Kirakowski J. 'Realness' in chatbots: establishing quantifiable criteria. 2013 Presented at: 15th International Conference on Human-Computer Interaction. Interaction Modalities and Techniques; July 21-26, 2013; Las Vegas, NV. [CrossRef]
- Kvale K, Sell OA, Hodnebrog S, Følstad A. Improving conversations: lessons learnt from manual analysis of chatbot dialogues. 2020 Presented at: 4th International Workshop on Chatbot Research and Design; November 23-24, 2020; Virtual p. 187-200. [CrossRef]
- Pardo D, Mencia BL, Trapote ÁH, Hernández L. Non-verbal communication strategies to improve robustness in dialogue systems: a comparative study. J Multimodal User Interfaces 2010 Sep 17;3(4):285-297. [CrossRef]
- Eklund AA, Helmefalk M. Congruency or incongruency: a theoretical framework and opportunities for future research avenues. JPBM 2021 Jul 31;31(4):606-621. [CrossRef]
- Jansson-Boyd CV. Perception and consumption: Touch, multisensory integration and congruency. In: Jansson-Boyd CV, Zawisza MJ, editors. Routledge International Handbook of Consumer Psychology. England, UK: Routledge; 2017.
- Download historical data (to 20 June 2022) on the weekly number of new reported COVID-19 cases and deaths worldwide. European Centre for Disease Prevention and Control. 2022. URL: https://www.ecdc.europa.eu/en/publications-data/download-historical-data-20-june-2022-weekly-number-new-reported-covid-19-cases [accessed 2023-02-01]
- Covid vaccinations Belgium. Covid Vaccinaties België. URL: https://www.covid-vaccinatie.be [accessed 2023-04-20]
- Bocklisch T, Faulkner J, Pawlowski N, Nichol A. Rasa: open source language understanding and dialogue management. arXiv:1712.05181. Preprint posted online December 14, 2017. [CrossRef]
- Buhmann J, De Bruyn M, Lotfi E, Daelemans W. Domain- and task-adaptation for VaccinChatNL, a Dutch COVID-19 FAQ answering corpus and classification model. 2022 Presented at: 29th International Conference on Computational Linguistics (COLING 2022); October 12-17, 2022; Gyeongju, Republic of Korea p. 12-17.
- Smit ES, Dima AL, Immerzeel SAM, van den Putte B, Williams GC. The virtual care climate questionnaire: development and validation of a questionnaire measuring perceived support for autonomy in a virtual care setting. J Med Internet Res 2017 May 08;19(5):e155 [FREE Full text] [CrossRef] [Medline]
- Zarouali B, Van den Broeck E, Walrave M, Poels K. Predicting consumer responses to a chatbot on Facebook. Cyberpsychol Behav Soc Netw 2018 Aug;21(8):491-497. [CrossRef] [Medline]
- Hayes AF. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. New York, NY: Guilford Publications; 2018.
- Pelau C, Dabija D, Ene I. What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Comput Hum Behav 2021 Sep;122:106855. [CrossRef]
- Lou C, Kang H, Tse CH. Bots vs. humans: how schema congruity, contingency-based interactivity, and sympathy influence consumer perceptions and patronage intentions. Int J Advert 2021 Jul 21;41(4):655-684. [CrossRef]
- Sundar SS. Rise of machine agency: a framework for studying the psychology of human–AI interaction (HAII). JCMC 2020;25(1):74-88. [CrossRef]
- Larson H, de Figueiredo A, Karafillakis E, Rawal M. State of Vaccine Confidence in the EU 2018. Luxembourg: Publications Office of the European Union; 2018.
- Claessens T, Poels P, Trzebinski W. The effects of expressing empathy/autonomy-support by COVID-19 vaccination chatbot in Belgian adults: an experimental study. Open Science Framework. 2023 Feb 3. URL: https://osf.io/n7tpk/?view_only=819254dee4d14bc6b002449020b1767a [accessed 2023-04-20]
Abbreviations
AI: artificial intelligence
CA: conversation accuracy
CASA: Computers as Social Actors
CD: conversation-in-database
CF: conversation fallback
CPI: chatbot patronage intention
PUA: perceived user autonomy
UVM: Uncanny Valley of Mind
VIF: variance inflation factor
VIS: vaccination intention shift
WN: conversation wrong-answer |
Edited by A Mavragani; submitted 19.07.22; peer-reviewed by Y Liu, C Liebrecht; comments to author 09.12.22; revised version received 09.03.23; accepted 31.03.23; published 08.05.23
Copyright © Wojciech Trzebiński, Toni Claessens, Jeska Buhmann, Aurélie De Waele, Greet Hendrickx, Pierre Van Damme, Walter Daelemans, Karolien Poels. Originally published in JMIR Formative Research (https://formative.jmir.org), 08.05.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.