This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.
Chatbots are increasingly used to support COVID-19 vaccination programs. Their persuasiveness may depend on the conversation-related context.
This study aims to investigate the moderating role of the conversation quality and chatbot expertise cues in the effects of expressing empathy/autonomy support using COVID-19 vaccination chatbots.
This experiment with 196 Dutch-speaking adults living in Belgium, who engaged in a conversation with a chatbot providing vaccination information, used a 2 (empathy/autonomy support expression: present vs absent) × 2 (chatbot expertise cues: expert endorser vs layperson endorser) between-subject design. Chatbot conversation quality was assessed through actual conversation logs. Perceived user autonomy (PUA), chatbot patronage intention (CPI), and vaccination intention shift (VIS) were measured after the conversation, coded from 1 to 5 (PUA, CPI) and from –5 to 5 (VIS).
There was a negative interaction effect of chatbot empathy/autonomy support expression and conversation fallback (CF; the percentage of chatbot answers “I do not understand” in a conversation) on PUA (PROCESS macro, model 1, B=–3.358, SE 1.235,
The findings suggest that expressing empathy/autonomy support using a chatbot may harm its evaluation and persuasiveness when the chatbot fails to answer its users’ questions. The paper adds to the literature on vaccination chatbots by exploring the conditional effects of chatbot empathy/autonomy support expression. The results will guide policy makers and chatbot developers dealing with vaccination promotion in designing the way chatbots express their empathy and support for user autonomy.
Chatbots are used as a promising tool to promote COVID-19 vaccinations [
According to the Behavioral Intervention Technology Model [
Even amid the COVID-19 pandemic threat, vaccine hesitancy, fueled by misinformation and conspiracy theories, remains an issue in health promotion efforts [
Chatbots can be accessible to many users and enable conversations with them, supporting scaling up and personalizing health communication [
In the context of vaccination, promotion may be perceived as limiting one's freedom. Namely, convincing or persuading someone to vaccinate pushes the communication receiver toward an activity (ie, vaccination) that may be perceived as intrusive and as affecting their body and health. Thus, psychological reactance, a motivational state oriented toward reestablishing one's freedom when threatened [
Chatbots may differ in expressing empathy/autonomy support, whose influence on the user response appears complex. The Computers as Social Actors (CASA) approach [
The existing evidence on the consequences of chatbot empathy provides no conclusion regarding user reactions to vaccine promotion chatbots, including CPI and chatbot persuasiveness (ie, the degree to which user vaccination intent changes after the conversation with the chatbot). Furthermore, although previous studies suggest that the way of empathizing (ie, too “humanlike” expression) may lead to a negative response, it is unclear how the response to vaccination chatbot empathy depends on the context related to the conversation, such as the conversation quality and chatbot expertise cues (eg, the chatbot’s endorser credibility). Conversation quality is the actual quality of the information provided by a chatbot to specific users' questions, which is considered a key component of chatbot performance [
We propose that the effect of vaccination chatbot empathy/autonomy support expression on user response may be moderated by the conversation quality. In general, the quality of chatbot performance consists of how well a chatbot matches its predefined responses with the user’s request and how well the set of the chatbot’s predefined responses covers the requests of its users [
Hypothesis (H)1: The relationship between the user response to COVID-19 vaccination chatbots (ie, CPI and chatbot persuasiveness in terms of COVID-19 vaccination) and their expressed empathy/autonomy support is less positive (or more negative) when the conversation quality is lower.
Chatbot endorser credibility may be an additional moderator of the chatbot empathy/autonomy support expression effect. Namely, a more credible endorser (ie, expert vs layperson) may act as an expertise cue and therefore produce a higher level of initial trust toward the chatbot [
H2: The relationship between the user response to COVID-19 vaccination chatbots (ie, CPI and chatbot persuasiveness in terms of COVID-19 vaccination) and their expressed empathy/autonomy support is more positive (or less negative) when the chatbot is endorsed by an expert (vs a layperson).
Our experimental study aims to test user reactions to COVID-19 vaccination chatbots with different levels of expressed empathy/autonomy support, examining the proposed moderating role of conversation quality and chatbot endorser credibility.
In total, 211 Dutch-speaking adults living in Belgium (aged 18-60 years) and not vaccinated (yet) participated in an online experiment. The participants were panel respondents of the market research company Bilendi, Belgium. Their panel was recruited through various sources, such as the Bilendi website, banner media websites, member-get-member, third-party databases, and offline recruitment. To join the panel, respondents had to complete a sociodemographic questionnaire, approve a double opt-in where they confirmed they wanted to participate in Bilendi’s surveys, and accept their privacy policy. The participants were approached for the survey through personal email invitations and reminders. To thank the respondents for their contribution and encourage them to complete the online experiment, Bilendi granted points that the respondents could save up and swap for a prize later. The data collection was conducted in 2021, between May 28 and June 3, while daily new cases in Belgium were around 1400 [
Sample characteristics (N=196).
Characteristics | Total sample, n (%) | Present empathy/autonomy support expression, n (%) | Absent empathy/autonomy support expression, n (%)
Gender
Women | 112 (57.1) | 64 (63.4) | 48 (50.5)
Men | 84 (42.9) | 37 (36.6) | 47 (49.5)
Age (years)
<26 | 14 (7.1) | 8 (7.9) | 6 (6.3)
26-35 | 60 (30.6) | 37 (36.6) | 23 (24.2)
36-45 | 64 (32.7) | 32 (31.7) | 32 (33.7)
46-55 | 55 (28.1) | 24 (23.8) | 31 (32.6)
>55 | 3 (1.5) | N/Aa | 3 (3.2)
Education
Primary | 1 (0.5) | 1 (1.0) | N/A
Secondary | 40 (20.4) | 13 (12.9) | 27 (28.4)
Bachelor’s degree | 81 (41.3) | 44 (43.6) | 37 (38.9)
Master’s degree | 71 (36.2) | 41 (40.6) | 30 (31.6)
Doctoral degree | 3 (1.5) | 2 (2.0) | 1 (1.1)
aN/A: not applicable.
As experimental stimuli, we used VaccinChat, a chatbot providing COVID-19 vaccination information. VaccinChat was developed by the CLiPS (Computational Linguistics, Psycholinguistics and Sociolinguistics) research center associated with the Department of Linguistics of the Faculty of Arts of the University of Antwerp, Belgium. The chatbot is a RASA [
The experiment had a 2 (chatbot empathy/autonomy support expression: present vs absent) × 2 (endorser credibility: expert vs layperson) between-subject design. The participants were randomly assigned to the experimental conditions while striving for an equal gender distribution. The final sample sizes were Npresent=101 and Nabsent=95 for the empathy/autonomy support expression conditions and Nexpert=111 and Nlayperson=85 for the endorser credibility conditions (experimental cell sizes: Nempathy/autonomy present, expert=57, Nempathy/autonomy present, layperson=44, Nempathy/autonomy absent, expert=54, Nempathy/autonomy absent, layperson=41). No significant differences in initial vaccination intention occurred across the experimental conditions (empathy/autonomy support expression condition:
The participants first received an invitation email with the survey link, which randomly redirected them to 1 of the 4 experimental conditions. After providing informed consent, the participants started the preintervention survey with demographics and questions on COVID-19 vaccination status, initial COVID-19 vaccination intent, and personal experience with COVID-19. Next, the participants were informed about the chatbot, received the related instructions and time recommendations, and were redirected to the chatbot and asked to start the conversation. We encouraged the participants to submit their questions and concerns about COVID-19 vaccinations. The instruction was as follows:
In a moment, you will be asked to start a conversation with the chatbot. A chatbot is a kind of computer program. There is no real person behind the chat. The conversation will open in a new tab. We are interested in your experience with the chatbot. The idea is that you will submit any questions or concerns you may have about the COVID-19 vaccines and vaccination to the chatbot. These questions can be very practical but can also be about genuine concerns or serious vaccine doubts. If the chatbot does not understand your question, please try to word your question differently. We ask you to have a conversation with the chatbot for a minimum of 3 minutes and a maximum of 5 minutes.
Before starting a conversation with the chatbot, the participants read a brief endorsement claiming, “The chatbot gives a clear answer to the questions and concerns about the COVID-19 vaccines,” and that the chatbot is “particularly on point.” In the expert endorser condition, this endorsement was provided by “the well-known vaccinologist, Prof. Dr. Pierre Van Damme.” In the layperson endorsement condition, the endorsers were described as “a group of Flemings with questions about vaccines, including vaccine doubters.”
The chatbot worked differently depending on the empathy/autonomy support expression condition. Namely, in the present empathy/autonomy support expression condition, the chatbot displayed additional phrases to express its empathy and autonomy support. Specifically, the chatbot showed understanding of the user’s concerns (eg, “I understand your concern. This is ultimately about your own body.”), showed appreciation for the user’s knowledge (eg, “Good that you are aware of this.”), supported the user’s autonomy regarding the decision to get vaccinated (eg, “Whether or not you vaccinate your children, when possible, remains, of course, a personal choice.”), showed interest in the user’s situation (eg, “I hope with you that we can soon pick up life again in all its glory.”), and comforted the user by referring to the social context (eg, “There are many people who are currently facing this question.”). See all phrases appearing in the predefined set of the chatbot's answers in
Empathy/autonomy support expression phrases in the predefined set of the chatbot's answers (the present empathy/autonomy-support expression condition).
Way of expressing empathy/autonomy support | Phrases in the predefined set of the chatbot's answers (listed in alphabetical order)
Showing an understanding of the user’s concerns | “I understand your concern. This is ultimately about your own body.” “It is normal that you are curious about the differences. It is even good that you are aware of this.” “It is quite normal that you would prefer to select your vaccine yourself. Maybe you still have certain questions about some vaccines? Do not hesitate to ask them.” “We understand your concerns about possible side effects of a vaccine.”
Showing appreciation for the user | “Good question!” “Good that you are aware of this.” “Good that you are checking this out. I will try to make it clear to you below.” “Good that you are informing yourself about the different vaccines currently being used.” “Good that you are inquiring about this.” “How nice to read that you want to inform yourself well about the moment when it is your turn for a vaccine.” “So it's good that you think about this.”
Expressing autonomy support regarding the decision to get vaccinated | “Getting vaccinated, therefore, remains entirely your own choice. List for yourself which considerations are important to you. This can help you choose whether or not you want to vaccinate.” “Whether or not you vaccinate your children, when possible, remains, of course, a personal choice.” “I am happy to help you find reliable information about vaccines. The final choice to get vaccinated is entirely up to you.”
Showing interest in the user’s situation | “I hope you can get your vaccine soon, if desired!” “I hope with you that we can soon pick up life again in all its glory.”
Comforting the user by referring to the social context | “There are many people who are currently facing this question.”
These phrases (
The empathy/autonomy support phrases were attached to predefined chatbot answers (called adapted answers). The occurrence of those answers in a conversation depended on the questions asked by each participant. In the actual conversations held with the participants in our study, the mean share of adapted answers (ie, those belonging to the predefined set of the chatbot's answers differing between the present and absent empathy/autonomy support expression conditions) was 63% (SD 0.2349). To ensure that the participant-level share of adapted answers would not disturb the tested effects, we included it as an additional covariate in all analyses.
The most frequently asked questions were similar to those reported by Buhmann et al [
After their conversation with the chatbot, the participants had to use the conversation code they received from the chatbot at the beginning of the conversation to start the postintervention survey. Based on that code, we were able to link their answers to their IDs. Next, we measured the participants’ perceived autonomy during the conversation, CPI, and the intention to get vaccinated against COVID-19. They were debriefed that the chatbot gave different responses to different participants, but the information provided by the chatbot did not contain any factual falsehoods. Finally, the participants were asked for feedback, thanked, and received monetary compensation.
The initial COVID-19 vaccination intention was measured right after the question about the COVID-19 vaccination status with the item “If you had to make a choice now, will you get vaccinated or not, as soon as the opportunity presents itself?” The after-conversation intention was measured with the item “After talking to the chatbot, will you get vaccinated against COVID-19 as soon as you get an invitation?” Both items used a 6-point response format ranging from 1 (very unlikely/certainly not) to 6 (very likely/certainly so). The vaccination intention shift (VIS) was determined by subtracting the initial COVID-19 intention from the after-conversation intention and coded from –5 (indicating the highest-possible decrease) to 5 (indicating the highest-possible increase).
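The VIS coding described above can be made concrete with a small sketch (the function name and validation are ours, not the authors'):

```python
def vaccination_intention_shift(initial, after):
    """Shift in COVID-19 vaccination intention, coded from -5 to 5.

    Both intentions use the 6-point response format (1 = very unlikely/
    certainly not, 6 = very likely/certainly so), so the difference is
    bounded by -5 and 5.
    """
    if not (1 <= initial <= 6 and 1 <= after <= 6):
        raise ValueError("intentions must be on the 6-point scale (1-6)")
    return after - initial

# A participant moving from 2 ("unlikely") before the conversation
# to 5 ("likely") afterward has a VIS of 3.
print(vaccination_intention_shift(2, 5))  # 3
```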
Here, 10 items adapted from Smit et al [
Measurement scales.
Scale | Item | Reliability, α | Reference
PUAa | “I feel that the conversation with the chatbot has given me choices and options.” “I felt understood during the conversation with the chatbot.” “I felt that the chatbot was not judging me during the conversation.” “The conversation with the chatbot encouraged me to look for answers to my questions.” “I have great confidence in the chatbot.” “The chatbot answered my questions completely and carefully.” “During the conversation with the chatbot, I was able to give input on how I would like to do things myself.” “The chatbot took into account my emotions in the answers and advice given.” “I felt that the chatbot cares about me as a person.” “The chatbot tried to include how I see things in the advice given.” | .905 | Adapted from Smit et al [
CPIb | “I am willing to submit questions I have about vaccines to the chatbot.” “The likelihood that I would ask the questions I have about vaccines to the chatbot is high.” “I am willing to recommend the chatbot to friends.” | .921 | Similar to Zarouali et al [
aPUA: perceived user autonomy.
bCPI: chatbot patronage intention.
Here, 3 items measured the patronage intention toward the chatbot (CPI; α=.921). Similar items have been frequently used in previous research (eg, [
In addition to the respondents' perceptions, we analyzed the quality of actual answers provided by the chatbot for each respondent using the following metrics based on conversation logs:
Conversation accuracy (CA): The percentage of correct answers in the conversation (an answer's correctness was determined manually by assessing the relevance of the answer provided by the chatbot to the user's question). The mean CA was 53.0% (SD 0.24295); 100% CA was reached for 9.2% (n=18) of the participants, and CA exceeded 50% for 45.4% (n=89) of the participants.
Conversation-in-database (CD): The percentage of user questions in the conversation that were exactly the same as questions already included in the chatbot training data. The mean CD was 5.9% (SD 0.11294); for 70.4% (n=138) of the participants, the CD was 0% (no exact matches), and it did not exceed 50% for the other participants.
CF: The percentage of answers in the conversation that took the form of “I don't understand,” followed by a request to rephrase the question. The mean CF was 4.0% (SD 0.08979); the CF was 0% for 77.6% (n=152) of the participants (no fallbacks), and it did not exceed 50% for 99.5% (n=195) of the participants.
Conversation wrong-answer (WN): The percentage of user questions in the conversation for which the chatbot did not have a suitable answer and provided an answer that corresponded to another question. The mean WN was 12.5% (SD 0.17254); the WN was 0% for 52.6% (n=103) of the participants (no wrong answers), and it did not exceed 50% for 97.4% (n=191) of the participants.
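A minimal sketch of how such per-conversation metrics could be derived from labeled logs follows. The label names and data layout are illustrative assumptions, not the authors' actual coding pipeline; in the study, answer quality was coded manually, and the categories need not be mutually exclusive as they are here.

```python
def conversation_metrics(answer_labels):
    """Percentage-based conversation quality metrics.

    `answer_labels` holds one manually assigned label per chatbot answer:
    "correct", "in_database", "fallback", or "wrong". For simplicity,
    each answer gets exactly one label here.
    """
    n = len(answer_labels)
    share = lambda label: 100.0 * answer_labels.count(label) / n
    return {
        "CA": share("correct"),      # conversation accuracy
        "CD": share("in_database"),  # conversation-in-database
        "CF": share("fallback"),     # conversation fallback
        "WN": share("wrong"),        # conversation wrong-answer
    }

# A 4-answer conversation with one fallback and one wrong answer:
log = ["correct", "correct", "fallback", "wrong"]
print(conversation_metrics(log)["CF"])  # 25.0
```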
Most errors could be attributed to questions that were not yet included in the chatbot at the moment of the intervention. For example, a number of participants asked questions about the possibility of a third vaccination round because this had been reported in the media. The chatbot, however, had not been updated to deal with this information at that time.
The Ethics Committee for the Social Sciences and Humanities of the University of Antwerp, Belgium (code of ethical approval: SHW_21_57) approved this research. Prior to the study, all respondents signed the informed consent form. All data were anonymously collected; we only kept track of an assigned response ID. All human experimentation guidelines were respected. Our participants received monetary compensation for their contribution to our online experiment.
First, we ran 2 ANCOVAs to check the main effects of chatbot empathy/autonomy support expression on chatbot evaluation separately for PUA and CPI as dependent variables. The empathy/autonomy support expression condition (present vs absent) and the endorser credibility condition (expert vs layperson) were factors, and the share of adapted answers for empathy/autonomy support expression, gender, age, education, and initial vaccination intention were covariates. No significant manipulation effects occurred under the following conditions:
PUA under the empathy/autonomy support expression condition: meanpresent 2.935, SD 0.747; meanabsent 3.045, SD 0.864;
PUA under the endorser credibility condition: meanexpert 2.983, SD 0.858; meanlayperson 2.996, SD 0.736;
CPI under the empathy/autonomy support condition: meanpresent 3.150, SD 1.152; meanabsent 3.450, SD 1.289;
CPI under the endorser credibility condition: meanexpert 3.293, SD 1.233; meanlayperson 3.306, SD 1.223;
In addition, the 2 manipulations had no interaction effects (for PUA:
Next, we checked the role of actual answers provided by the chatbot in the relationship between the empathy/autonomy support expression and chatbot evaluation. Specifically, we checked the moderation effects of conversational quality variables. We ran a series of moderation analyses (PROCESS macro [
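In regression terms, a moderation analysis of this kind (PROCESS model 1 with PUA as the outcome) estimates an interaction model of the following form (notation ours; Empathy is the expression dummy, coded 1=present, 0=absent):

```latex
\mathrm{PUA}_i = b_0 + b_1\,\mathrm{Empathy}_i + b_2\,\mathrm{CF}_i
  + b_3\,(\mathrm{Empathy}_i \times \mathrm{CF}_i)
  + \text{covariates} + \varepsilon_i
```

The reported interaction coefficient corresponds to $b_3$, and the conditional effect of empathy/autonomy support expression at a given moderator value $w$ is $b_1 + b_3 w$, which is what the Johnson-Neyman procedure probes across the observed range of CF.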
We used CF as a moderator in further analysis. Aiming to check the consequences of the CF × empathy/autonomy support expression interaction on behavioral outcomes, we ran 2 first-stage moderated mediation analyses (PROCESS macro [
In the first analysis, CPI served as a dependent variable (
In the second analysis, VIS served as a dependent variable (
Moderation effects of CF in the relationship between chatbot empathy/autonomy support expression and PUA.
Visualization of conditional effects of chatbot empathy/autonomy support expression on PUA (CF is the moderator). CF: conversation fallback.
Johnson-Neyman output for the moderation effect of CFa in the relationship between chatbot empathy/autonomy support expression (independent variable; 1=present, 0=absent) and PUAb (dependent variable).
CF | Effect | SE | t | P value | 95% CI
–0.0399 | 0.0310 | 0.1232 | 0.2517 | .80 | –0.2121 to 0.2742 |
–0.0121 | –0.0623 | 0.1138 | –0.5472 | .59 | –0.2867 to 0.1622 |
0.0157 | –0.1555 | 0.1143 | –1.3613 | .18 | –0.3810 to 0.0699 |
0.0421 | –0.2445 | 0.1239 | –1.9728 | .05 | –0.4889 to 0 |
0.0434 | –0.2488 | 0.1246 | –1.9972 | .047 | –0.4946 to –0.0030 |
0.0712 | –0.3421 | 0.1426 | –2.3985 | .02 | –0.6235 to –0.0607 |
0.099 | –0.4354 | 0.1659 | –2.6245 | .009 | –0.7627 to –0.1081 |
0.1268 | –0.5287 | 0.1925 | –2.7465 | .007 | –0.9085 to –0.1489 |
0.1545 | –0.622 | 0.2212 | –2.8114 | .006 | –1.0584 to –0.1855 |
0.1823 | –0.7153 | 0.2514 | –2.8455 | .005 | –1.2112 to –0.2194 |
0.2101 | –0.8086 | 0.2825 | –2.8625 | .005 | –1.3658 to –0.2513 |
0.2379 | –0.9018 | 0.3142 | –2.8700 | .005 | –1.5217 to –0.2819 |
0.2657 | –0.9951 | 0.3465 | –2.8721 | .005 | –1.6787 to –0.3116 |
0.2934 | –1.0884 | 0.3791 | –2.8711 | .005 | –1.8363 to –0.3405 |
0.3212 | –1.1817 | 0.4120 | –2.8683 | .005 | –1.9945 to –0.3689 |
0.349 | –1.2750 | 0.4451 | –2.8645 | .005 | –2.1531 to –0.3969 |
0.3768 | –1.3683 | 0.4784 | –2.8603 | .005 | –2.3120 to –0.4245 |
0.4045 | –1.4616 | 0.5118 | –2.8558 | .005 | –2.4712 to –0.4519 |
0.4323 | –1.5548 | 0.5453 | –2.8513 | .005 | –2.6306 to –0.4791 |
0.4601 | –1.6481 | 0.5789 | –2.8469 | .005 | –2.7902 to –0.5060 |
0.4879 | –1.7414 | 0.6126 | –2.8425 | .005 | –2.9500 to –0.5328 |
0.5157 | –1.8347 | 0.6464 | –2.8384 | .005 | –3.1099 to –0.5595 |
aCF: conversation fallback.
bPUA: perceived user autonomy.
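The conditional effects tabulated above follow the pick-a-point form b1 + b3 × CF. As an arithmetic illustration only, plugging in the interaction coefficient reported in the Results (B=–3.358) and the simple effect at CF=0 implied by the table's rows (roughly –0.103, our back-calculation) reproduces the tabulated values:

```python
def conditional_effect(b1, b3, w):
    """Effect of the empathy/autonomy support dummy at moderator value w."""
    return b1 + b3 * w

# Interaction coefficient reported in the Results (B = -3.358) and the
# simple effect at CF = 0 implied by the Johnson-Neyman rows (~ -0.103).
B1, B3 = -0.103, -3.358

# At CF = 0.0157 the table reports an effect of -0.1555.
print(round(conditional_effect(B1, B3, 0.0157), 3))  # -0.156
```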
Moderated mediation effects on CPI. CPI: chatbot patronage intention.
Moderated mediation effects on VIS. VIS: vaccination intention shift.
The aim of our study was to improve the understanding of the role of the conversation-related context in the effects of expressing empathy/autonomy support using COVID-19 vaccination chatbots. Specifically, we checked the moderating role of 2 conversation context factors (ie, conversation quality and chatbot expertise cues). The results of our experiment suggest that chatbot empathy/autonomy support expression may harm user evaluation and behavioral intentions related to a chatbot and diminish its usefulness in promoting COVID-19 vaccinations when the conversation quality is low (as indicated by higher levels of CF). However, we found no effects of chatbot expertise cues, which were manipulated through chatbot endorser credibility.
We demonstrated that the effect of expressing empathy/autonomy support using the COVID-19 vaccination chatbot on PUA was negative for a high level of CF, while being nonsignificant for its mean and low levels. Moreover, we showed that this moderation effect has consequences on CPI and chatbot persuasiveness. The indirect effects of empathy/autonomy support expression on CPI and VIS through PUA were negative for a high level of CF, while being nonsignificant for its mean and low levels. These results add to the growing literature on the user response to chatbots (eg, [
CF (ie, the chatbot replying “I do not understand”) may represent a deficiency of actual chatbot empathy and, in turn, its limited support for user autonomy, as the chatbot fails to resolve the user's request. As such, higher levels of CF may signal to users that the chatbot's actual answers lack empathy and support for user autonomy. Considering this, the negative interaction of chatbot empathy/autonomy support expression and CF that we revealed is in line with the notion that chatbot anthropomorphism (here, in the form of empathy/autonomy support expression) may increase user expectations and sensitivity to the quality of interaction [
In terms of Sundar's [
The absence of interaction effects between expressed empathy/autonomy support and the chatbot's endorser credibility might be caused by our experimental setting, which involved extensive real interactions with the chatbot. In this case, the user response might have been influenced more by the actual conversation quality, and expertise cues were therefore discounted. Interestingly, endorser credibility also had no significant main effect on the user response.
Certain limitations of our study call for further research. First, our study used a sample of Dutch-speaking Belgian adults, who have been shown to hold relatively positive attitudes toward vaccinations (eg, 84.0% agreement with “Vaccines are effective” [
Second, we were limited to a specific stage of the COVID-19 pandemic (ie, about 1 year after introducing the vaccines) and a specific preventable illness. Further research can extend the investigation of chatbot empathy/autonomy support effectiveness in later pandemic stages (eg, when COVID-19 became endemic) and other preventable illnesses (eg, influenza).
Third, we conceptualized and operationalized chatbot empathy and autonomy support together. Therefore, future studies examining the effects of chatbot empathy/autonomy support expression should distinguish between these 2 constructs in experimental manipulations and measurements. Next, further studies may investigate in more depth the mechanisms underlying the negative effects of the interaction of empathy/autonomy support expression and conversation quality demonstrated in this paper. Specifically, whether those effects result from chatbot empathy being more likely to evoke an eeriness perception (as considered in UVM theory [
Fourth, we measured conversation quality instead of manipulating it. Although this approach enhances realism, as the quality of answers provided by chatbots may heavily depend on the interactions with their users, manipulating this variable should provide better causal evidence of its moderating role. Therefore, future experiments can compare conditions differing in the percentage of questions a chatbot fails to answer.
Lastly, our null result regarding the moderating role of expertise cues in the effect of expressing empathy/autonomy support may result from our experimental setting, in which the participants were involved in extensive interaction with the chatbot. Perhaps expertise cues play a larger role in less engaging interactions. This possibility should be examined in further research.
Our study underlined the pitfalls of making vaccination chatbots empathetic in specific conversation-related contexts, namely when conversation quality is low. The results may guide vaccine policy makers and chatbot developers to be careful with communicating chatbot empathy/autonomy support unless the chatbot's capability to address user questions is firmly proven. Introducing a chatbot's empathy/autonomy support expression should therefore be preceded by assessing the quality of the chatbot's answers. For example, chatbots in an early stage of development that show a high level of CF should not yet express empathy and support for user autonomy. However, it is difficult to keep up with the information flow manually when information changes quickly, as in a pandemic. Consequently, there may always be user questions for which the chatbot does not yet know appropriate answers, resulting in fallbacks. Automatic text-mining tools or a model such as ChatGPT could be used to scrape new information from the web more easily. In addition, an instruction-tuned model such as ChatGPT could generate a multitude of exemplary user questions. These could be used as extra training data, speeding up and improving the chatbot's continuous growth, thus avoiding or reducing the number of fallback answers and, eventually, allowing the effective use of chatbot empathy/autonomy support expression.
artificial intelligence
conversation accuracy
Computers as Social Actors
conversation-in-database
conversation fallback
chatbot patronage intention
perceived user autonomy
Uncanny Valley of Mind
variance inflation factor
vaccination intention shift
conversation wrong-answer
This research received funding from (1) the Flemish government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” program and (2) the King Baudouin Foundation of Belgium.
The data set generated and analyzed during this study is available in the Open Science Framework, the repository of the Center for Open Science [
KP was the project's principal investigator. KP supervised the study design, data collection, analysis, and manuscript development. KP, TC, and JB were responsible for the study design and data collection. JB and WD developed the technical part of the experimental tool. ADW, GH, and PVD developed the content part of the experimental tool. WT (corresponding author) worked on data analysis and manuscript writing. TC and JB contributed to data processing, analysis, and writing of the Methods section of the manuscript. TC, JB, ADW, GH, PVD, WD, and KP reviewed the manuscript.
None declared.