Background: Artificial intelligence-powered voice assistants (VAs), such as Apple Siri, Google Assistant, and Amazon Alexa, interact with users in natural language and are capable of responding to simple commands, searching the internet, and answering questions. Despite being an increasingly popular way for the public to access health information, VAs could be a source of ambiguous or potentially biased information.
Objective: In response to the ongoing prevalence of vaccine misinformation and disinformation, this study aims to evaluate how smartphone VAs respond to information- and recommendation-seeking inquiries regarding the COVID-19 vaccine.
Methods: A national cross-sectional survey of English-speaking adults who owned a smartphone with a VA installed was conducted online from April 22 to 28, 2021. The primary outcomes were the VAs’ responses to 2 questions: “Should I get the COVID vaccine?” and “Is the COVID vaccine safe?” Directed content analysis was used to assign a negative, neutral, or positive connotation to each response and website title provided by the VAs. Statistical significance was assessed using the t test (parametric) or Mann-Whitney U (nonparametric) test for continuous variables and the chi-square or Fisher exact test for categorical variables.
Results: Of the 466 survey respondents included in the final analysis, 404 (86.7%) used Apple Siri, 53 (11.4%) used Google Assistant, and 9 (1.9%) used Amazon Alexa. In response to the question “Is the COVID vaccine safe?” 419 (89.9%) users received a direct response, of which 408 (97.3%) had a positive connotation encouraging users to get vaccinated. Of the websites presented, only 5.3% (11/207) had a positive connotation and 94.7% (196/207) had a neutral connotation. In response to the question “Should I get the COVID vaccine?” 93.1% (434/466) of users received a list of websites, of which 91.5% (1155/1262) had a neutral connotation. For both COVID-19 vaccine–related questions, there was no association between the connotation of a response and the age, gender, zip code, race or ethnicity, and education level of the respondent.
Conclusions: Our study found that VAs were much more likely to respond directly, and with positive connotations, to the question “Is the COVID vaccine safe?” whereas to the question “Should I get the COVID vaccine?” they typically did not respond directly and instead provided a list of websites with neutral connotations. To our knowledge, this is the first study to evaluate how VAs respond to both information- and recommendation-seeking inquiries regarding the COVID-19 vaccine. These findings add to our growing understanding of both the opportunities and pitfalls of VAs in supporting public health information dissemination.
Artificial intelligence–powered voice assistants (VAs), such as Apple Siri, Google Assistant, and Amazon Alexa, interact with users in natural language and are capable of responding to simple commands, searching the internet, and answering questions. Globally, 27% of the population used voice search in 2018. In the United States, approximately 50 million homes contain a VA, and nearly two-thirds of surveyed users reported using their VA to seek information, including answering health questions.
In 2020, the COVID-19 pandemic precipitated a dramatic shift in health care delivery across the United States, including increased demand for and reliance on digital technology solutions to provide health and safety information. To date, a number of health care institutions have adopted VAs to augment pandemic response efforts or increase clinical capacity. Hospitals in Boston, Ohio, and Minnesota have used VAs to provide users with public health guidelines, news, and other medically relevant communications.
The US Food and Drug Administration’s emergency use authorization of the first COVID-19 vaccine in December 2020 represented an inflection point in the trajectory of the pandemic and a unique opportunity to study the usefulness of VAs in supporting health information communication. Vaccine hesitancy attributed to misinformation and disinformation remains a significant barrier to vaccine uptake. Findings from a US national survey published in July 2021 revealed that adults who believe the COVID-19 vaccine is unsafe are less willing to receive the vaccine, know less about the virus, and are more likely to believe vaccine myths.
Given the increasing use of VAs by individuals and health care institutions to obtain and provide health information, VAs represent a tool that could, in theory, support the dissemination of evidence-based vaccine information. However, the literature assessing the reliability of health information provided by VAs is limited, and recent studies have found that VAs provide users with health information that is inaccurate or incongruous with official recommendations. For example, a content analysis of Amazon Alexa’s responses to common pregnancy questions during the pandemic revealed that the majority (52%) of Alexa’s responses were not evidence based. One study found that VAs respond inconsistently and incompletely to questions about mental health and interpersonal violence. Another study found that among 70 addiction help-seeking queries presented to VAs, only 2 linked to remote treatment or treatment referral programs. Moreover, research assessing VAs’ capacity to integrate accurate information into a direct recommendation back to the user is scarce.
In response to the ongoing prevalence of vaccine misinformation and disinformation, this study aims to evaluate how different VAs respond to COVID-19 vaccine–related questions. To our knowledge, this is the first study to evaluate how VAs respond to both information- and recommendation-seeking inquiries regarding the COVID-19 vaccine.
Study Design and Participants
A national cross-sectional web-based survey was conducted over a one-week period from April 22 to April 28, 2021, among English-speaking adult smartphone users living in the United States. The study was timed to the Biden Administration’s announcement that 90% of the adult US population would be eligible for vaccination and 90% would have a vaccination site within 5 miles of home by April 19, 2021. We used a web-based snowball sampling strategy to recruit participants and employed several approaches to broaden our reach in the initial survey distribution. The approaches included (1) sending direct emails to individuals within the investigators’ professional and social networks, (2) posting on social media accounts, and (3) advertising through public mailing lists at Stanford University School of Medicine. Participants were asked to forward the survey link to their own social networks using email or social media.
All participants who accessed the survey were invited to fill out a 10-item questionnaire that asked them to provide the following: (1) information about their personal smartphone device and software (eg, manufacturer, phone model, operating system version, and voice assistant); (2) demographics (eg, age, gender, zip code, race or ethnicity, and education level); (3) personal experiences with COVID-19 and intention to receive the vaccine (eg, if they or anyone in their close social circle have been diagnosed with COVID-19, vaccination status, or reasons for not intending to get the vaccine); (4) information about the responses displayed by their VAs to the questions “Should I get the COVID vaccine?” and “Is the COVID vaccine safe?” Participants were excluded if they were less than 18 years of age, not fluent in English, not located in the United States, or did not have access to a smartphone with a VA installed.
The primary outcomes were the responses of the VAs to the 2 questions about the COVID-19 vaccine. The survey instructed participants to ask their VA the 2 questions verbatim. We recommended that participants upload screenshots to provide information about VA responses, but we also provided an option allowing manual entry of information to accommodate those who were less tech-savvy. However, during data cleaning, we found that information provided through manual entry was largely incomplete and thus unreliable, so we excluded these data from the final analysis. At the end of the study period, we reviewed all of the survey responses and completed the following data cleaning process: (1) removed survey responses containing manual entry of VA responses; (2) removed incomplete survey responses containing screenshots or image files that did not display relevant information; (3) removed duplicate survey responses, keeping only the first survey response submitted by the user.
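The three-step cleaning process described above can be sketched in pandas. This is a minimal illustration with hypothetical data and assumed column names (`user_id`, `entry_mode`, `screenshot_relevant`), not the study's actual pipeline:

```python
# Illustrative sketch of the data cleaning steps: drop manual entries,
# drop uninformative screenshots, and deduplicate by user (keeping the
# first submission). Column names and data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "user_id":             [1, 1, 2, 3, 4],
    "entry_mode":          ["screenshot", "screenshot", "manual",
                            "screenshot", "screenshot"],
    "screenshot_relevant": [True, True, None, False, True],
})

# (1) remove survey responses containing manual entry of VA responses
df = df[df["entry_mode"] == "screenshot"]

# (2) remove responses whose screenshots did not display relevant information
df = df[df["screenshot_relevant"] == True]

# (3) remove duplicates, keeping only each user's first submission
df = df.drop_duplicates(subset="user_id", keep="first")

print(len(df))  # 2 rows remain: user 1's first entry and user 4's entry
```

The same logic could equally be implemented in R, which the authors used for their analysis.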
Through the screenshots, we ascertained whether the VA responded to the user by providing a direct response in the form of sentences, a list of websites, or a combination of the two. From the screenshots, we transcribed VAs’ direct responses as well as the titles of websites that the VA displayed verbatim. We transcribed up to the top 3 website titles displayed, as research shows that these results receive 75% of all clicks. Full-text analysis was not within the scope of this project. Two investigators then independently followed a directed content analysis approach, assessing each response and website title and assigning it a negative, neutral, or positive connotation. Negative responses were those that explicitly discouraged COVID-19 vaccination, actively cast doubt on the efficacy or safety of the vaccine, or otherwise took an active stance against vaccination. Positive responses were those that explicitly encouraged COVID-19 vaccination, clearly affirmed the efficacy and safety of the vaccine, or otherwise took an active stance in favor of vaccination. Neutral responses were those that neither encouraged nor discouraged COVID-19 vaccination, made no comment on the efficacy or safety of the vaccine, and did not explicitly take a clear stance for or against vaccination. Discrepancies in connotation assignment were resolved through discussion with a third investigator until consensus was reached.
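The double-coding step above can be sketched as follows. The item identifiers and labels are hypothetical; the point is only to show how independent codings are reconciled, with disagreements routed to a third reviewer:

```python
# Illustrative sketch of double coding with third-reviewer adjudication.
# Two investigators independently assign a connotation (negative, neutral,
# or positive) to each transcribed item; disagreements are flagged for
# resolution by discussion with a third investigator.
coder_a = {"item1": "positive", "item2": "neutral", "item3": "neutral"}
coder_b = {"item1": "positive", "item2": "neutral", "item3": "positive"}

consensus = {}
needs_third_reviewer = []
for item in coder_a:
    if coder_a[item] == coder_b[item]:
        consensus[item] = coder_a[item]      # coders agreed
    else:
        needs_third_reviewer.append(item)    # adjudicate by discussion

print(consensus)             # items the coders agreed on
print(needs_third_reviewer)  # ['item3'] goes to the third investigator
```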
Continuous variables were compared using the t test (parametric) or the Mann-Whitney U test (nonparametric). Categorical variables were compared using the chi-square or Fisher exact test. All statistical analyses were performed using R (version 3.20; R Core Team).
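Although the authors performed their analysis in R, the same comparisons can be sketched with `scipy.stats`. The data below are entirely hypothetical and serve only to illustrate the choice of tests:

```python
# Illustrative sketch (hypothetical data, not the study's) of the tests
# described above: a continuous variable compared across two groups, and
# a 2x2 categorical comparison.
from scipy import stats

# Hypothetical ages of respondents whose VA gave a positive vs neutral response
ages_positive = [34, 29, 41, 37, 52, 45, 31, 38]
ages_neutral = [33, 40, 36, 50, 28, 44]

# Parametric comparison (t test); Mann-Whitney U is the nonparametric fallback
t_stat, t_p = stats.ttest_ind(ages_positive, ages_neutral)
u_stat, u_p = stats.mannwhitneyu(ages_positive, ages_neutral)

# Hypothetical 2x2 contingency table: response connotation by education level
table = [[30, 10],
         [8, 12]]
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

# Fisher exact test is preferred when expected cell counts are small
odds_ratio, fisher_p = stats.fisher_exact(table)

print(f"t test P={t_p:.2f}, Mann-Whitney U P={u_p:.2f}")
print(f"chi-square P={chi_p:.3f}, Fisher exact P={fisher_p:.3f}")
```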
The Stanford University Institutional Review Board approved this study (protocol IRB-60731). Consent was obtained on the first page of the web-based survey. Upon completion of the survey, respondents received a US $10 gift card link. All study data are anonymous; information was recorded in such a manner that participants’ identities cannot readily be ascertained, directly or through linked identifiers.
There were 1362 responses to our initial survey; 896 survey responses were excluded as manual entries, incomplete responses, or duplicates (eg, one user completing the survey multiple times). Our final analysis included 466 unique survey responses: 404 (86.7%) of our respondents used Apple Siri as their VA, 53 (11.4%) used Google Assistant, and 9 (1.9%) used Amazon Alexa. Demographic data of respondents are shown in the table below.
In response to the question “Should I get the COVID vaccine?” 32 (6.9%) users received a direct response from their VA, while the rest (434/466, 93.1%) received a list of websites. None of the direct responses provided by VAs had a negative connotation; 9 of 32 (28.1%) users, all using Amazon Alexa, received a neutral response that recommended contacting the local health department; the rest of the users (23/32, 71.9%) received a positive response that encouraged vaccination. Users who did not receive a direct response were provided up to 3 websites to get more information. There were 69 unique websites presented a total of 1262 times. Overall, 8.5% (107/1262) of websites presented had a positive connotation, 91.5% (1155/1262) had a neutral connotation, and none had a negative connotation.
|Respondent demographics||Values, n (%)|
|Gender|
|Nonbinary or unknown||8 (2)|
|Race or ethnicity|
|Latinx or Hispanic||22 (5)|
|African American or Black||15 (3)|
|Other or unknown||44 (9)|
|Education level|
|High school or less||38 (8)|
|Some college or college graduate||328 (71)|
|Postgraduate training||90 (19)|
|Voice assistant used|
|Apple Siri||404 (87)|
|Google Assistant||53 (11)|
|Amazon Alexa||9 (2)|
|Intends to receive or has received the COVID-19 vaccine|
In response to the question “Is the COVID vaccine safe?” 419 (89.9%) users received a direct response from their VA, while the rest (47/466, 10.1%) received a list of websites. None of the direct responses provided by VAs had a negative connotation; 97.3% (408/419) of the responses received had a positive connotation and encouraged users to get vaccinated, and 2.6% (11/419), all from Google Assistant, had a neutral connotation. Users who did not receive a direct response were provided up to 3 websites to get more information. There were 53 unique websites presented a total of 207 times. Overall, 5.3% (11/207) of websites presented had a positive connotation, 94.7% (196/207) had a neutral connotation, and none had a negative connotation.
Examples of direct responses provided by VAs, as well as examples of websites provided to users who did not receive a direct response, are highlighted in the accompanying tables. For both COVID-19 vaccine–related questions, there was no association between the connotation of a response and the age (question 1: P=.51; question 2: P=.33), gender (question 1: P=.96; question 2: P=.72), zip code (question 1: P=.95; question 2: P=.27), race or ethnicity (question 1: P=.84; question 2: P=.86), or education level (question 1: P=.14; question 2: P=.54) of the respondent. Given the small sample sizes of Google Assistant and Amazon Alexa users, tests of significance were not run among the 3 VAs.
We found that the VAs were much more likely to respond directly to information-seeking questions (“Is the COVID vaccine safe?”) than to recommendation-seeking questions (“Should I get the COVID vaccine?”). Direct responses were more likely to have a positive connotation, and most VAs that provided a direct response gave the recommendation that the user should be vaccinated. Importantly, in no instance did VAs outright respond that vaccines were unsafe and should be avoided, even though a significant portion of Americans hold that belief. Nevertheless, the neutral responses provided by VAs did not explicitly highlight the safety of vaccines and may leave room for doubt if a user is already skeptical of vaccination.
Compared to the direct responses, the websites provided by VAs were much less likely to be outright supportive of vaccination, although importantly, no website had an explicitly negative connotation. Still, many of the websites with neutral connotations left room for doubt, with titles that highlighted adverse events (eg, “How long should COVID vaccine side effects last?”), implied a degree of social stigma around vaccination (eg, “Should you tell people you got the COVID-19 vaccine? Here’s what to consider”), or focused on contraindications to vaccination (eg, “health questions to consider before taking the COVID-19 vaccine”).
As society progresses further into the digital age, the way health information is presented to and consumed by the public is changing. Over the past decade, we have seen how health information is increasingly disseminated through the internet and social media, with significant implications for public safety. This has been especially relevant during the COVID-19 pandemic, where efforts to educate the public around safety measures, including masking and vaccination, have been undermined by the spread of misinformation and disinformation online.
Previous studies have highlighted pitfalls associated with the use of VAs as sources of health information, which often offer counsel that is inaccurate or incongruous with official recommendations. Our study adds to this body of work by examining the responses of VAs to questions about the COVID-19 vaccine. This work is novel in that it explores the responses of VAs to information-seeking questions that can be answered by medical evidence (eg, “Is the COVID vaccine safe?”) and it also asks them to respond to recommendation-seeking questions (eg, “Should I get the COVID vaccine?”).
Our study is limited by the exclusion of many survey responses due to manual entry and incomplete or duplicate responses, suggesting potential for improvements in data collection in the future. Unfortunately, this is a risk inherent in survey-based research conducted on the internet (eg, the increasing prevalence of survey bots). To mitigate this risk, substantial effort was undertaken to clean and ensure the validity and trustworthiness of the data that were included in the final analysis. Additional limitations include the overrepresentation of Apple devices using Siri, which may skew the types of responses we received, and the large percentage of respondents who were from coastal regions, highly educated, and supportive of COVID-19 vaccination. It is possible that VAs could tailor their responses based on their users’ demographics and search history, and future work should aim to address this question by including respondents of diverse backgrounds. Lastly, our finding that VAs were more likely to respond directly to an information-seeking query than to a recommendation-seeking query was based on 2 COVID-19–related questions. Future work should investigate whether this behavior of VAs is generalizable to other health-related questions.
Despite being an increasingly popular way for the public to access health information, current VAs could be a source of ambiguous or potentially biased information. Our study found that VAs were much more likely to respond directly, and with positive connotations, to the question “Is the COVID vaccine safe?” whereas to the question “Should I get the COVID vaccine?” they typically did not respond directly and instead provided a list of websites with neutral connotations. These findings add to our growing understanding of both the opportunities and pitfalls of VAs in supporting public health information dissemination, warranting further evaluation by public health professionals, technologists, and policymakers.
This work was supported by the Division of Primary Care and Population Health, Department of Medicine, Stanford University School of Medicine.
The data sets generated and analyzed during this study are available from the corresponding author on reasonable request.
Conflicts of Interest
SL is the principal investigator on a Google-sponsored research project administered by Stanford University, which is not related to this work. Neither he nor members of his family have financial interests in Google. None of the authors have any financial interests to declare.
- Voice search: a deep-dive into the consumer uptake of the voice assistant technology. Global Web Index. 2018. URL: https://www.gwi.com/hubfs/Downloads/Voice-Search-report.pdf [accessed 2023-02-05]
- Merritt A. Here's how people really feel about their digital assistants. VentureBeat. URL: https://venturebeat.com/2017/11/21/heres-how-people-really-feel-about-their-digital-assistants/ [accessed 2023-02-05]
- Sezgin E, Huang Y, Ramtekkar U, Lin S. Readiness for voice assistants to support healthcare delivery during a health crisis and pandemic. NPJ Digit Med 2020 Sep 16;3(1):122 [FREE Full text] [CrossRef] [Medline]
- Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman ALJ, Recchia G, et al. Susceptibility to misinformation about COVID-19 around the world. R Soc Open Sci 2020 Oct 14;7(10):201199 [FREE Full text] [CrossRef] [Medline]
- Kricorian K, Civen R, Equils O. COVID-19 vaccine hesitancy: misinformation and perceptions of vaccine safety. Hum Vaccin Immunother 2022 Dec 31;18(1):1950504-1950508 [FREE Full text] [CrossRef] [Medline]
- Yang S, Lee J, Sezgin E, Bridge J, Lin S. Clinical advice by voice assistants on postpartum depression: cross-sectional investigation using Apple Siri, Amazon Alexa, Google Assistant, and Microsoft Cortana. JMIR Mhealth Uhealth 2021 Jan 11;9(1):e24045 [FREE Full text] [CrossRef] [Medline]
- Ferrand J, Hockensmith R, Houghton RF, Walsh-Buhi ER. Evaluating smart assistant responses for accuracy and misinformation regarding human papillomavirus vaccination: content analysis study. J Med Internet Res 2020 Aug 03;22(8):e19018 [FREE Full text] [CrossRef] [Medline]
- Schindler-Ruwisch J, Palancia Esposito C. "Alexa, Am I pregnant?": A content analysis of a virtual assistant's responses to prenatal health questions during the COVID-19 pandemic. Patient Educ Couns 2021 Mar;104(3):460-463 [FREE Full text] [CrossRef] [Medline]
- Alagha EC, Helbing RR. Evaluating the quality of voice assistants' responses to consumer health questions about vaccines: an exploratory comparison of Alexa, Google Assistant and Siri. BMJ Health Care Inform 2019 Nov 24;26(1):e100075. [CrossRef] [Medline]
- Miner AS, Milstein A, Schueller S, Hegde R, Mangurian C, Linos E. Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA Intern Med 2016 May 01;176(5):619-625 [FREE Full text] [CrossRef] [Medline]
- Nobles AL, Leas EC, Caputi TL, Zhu S, Strathdee SA, Ayers JW. Responses to addiction help-seeking from Alexa, Siri, Google Assistant, Cortana, and Bixby intelligent virtual assistants. NPJ Digit Med 2020;3:11 [FREE Full text] [CrossRef] [Medline]
- Kocaballi AB, Quiroz JC, Rezazadegan D, Berkovsky S, Magrabi F, Coiera E, et al. Responses of conversational agents to health and lifestyle prompts: investigation of appropriateness and presentation structures. J Med Internet Res 2020 Feb 09;22(2):e15823 [FREE Full text] [CrossRef] [Medline]
- Seródio Figueiredo CM, de Melo T, Goes R. Evaluating voice assistants' responses to COVID-19 vaccination in Portuguese: quality assessment. JMIR Hum Factors 2022 Mar 21;9(1):e34674 [FREE Full text] [CrossRef] [Medline]
- Hong G, Folcarelli A, Less J, Wang C, Erbasi N, Lin S. Voice assistants and cancer screening: a comparison of Alexa, Siri, Google Assistant, and Cortana. Ann Fam Med 2021 Sep 13;19(5):447-449 [FREE Full text] [CrossRef] [Medline]
- FACT SHEET: President Biden announces 90% of the adult U.S. population will be eligible for vaccination and 90% will have a vaccination site within 5 miles of home by April 19. The White House. 2021. URL: https://tinyurl.com/3yh7u8ej [accessed 2023-02-05]
- Parker C, Scott S, Geddes A. Snowball sampling. SAGE Research Methods Foundations. 2019. URL: https://methods.sagepub.com/foundations/snowball-sampling [accessed 2023-02-05]
- Dean B. Here's what we learned about organic click through rate. Backlinko. URL: https://backlinko.com/google-ctr-stats [accessed 2023-02-05]
- Hsieh H, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res 2005 Nov;15(9):1277-1288. [CrossRef] [Medline]
VA: voice assistant
Edited by A Mavragani; submitted 27.09.22; peer-reviewed by H Zhang; comments to author 04.01.23; revised version received 24.01.23; accepted 29.01.23; published 08.02.23
Copyright
©Philip Sossenheimer, Grace Hong, Anna Devon-Sand, Steven Lin. Originally published in JMIR Formative Research (https://formative.jmir.org), 08.02.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.