Published on 21.06.2022 in Vol 6, No 6 (2022): June

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/33368.
The Drivers of Acceptance of Artificial Intelligence–Powered Care Pathways Among Medical Professionals: Web-Based Survey Study

Original Paper

1Faculty of Science, Athena Institute, Vrije Universiteit Amsterdam, Amsterdam, Netherlands

2Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, Netherlands

3DEARhealth, Amsterdam, Netherlands

4Department of Gastroenterology & Hepatology, Leiden University Medical Center, Leiden, Netherlands

Corresponding Author:

Lisa Cornelissen, MSc

Faculty of Science

Athena Institute

Vrije Universiteit Amsterdam

Boelelaan 1105

Amsterdam, 1081HV

Netherlands

Phone: 31 655320046

Email: lisa.cornelissen@dearhealth.com


Background: The emergence of artificial intelligence (AI) has proven beneficial in several health care areas. Nevertheless, the uptake of AI in health care delivery remains poor. Although the lack of acceptance of AI-based technologies among medical professionals is a key barrier to their implementation, knowledge about what informs such attitudes is scarce.

Objective: The aim of this study was to identify and examine factors that influence the acceptability of AI-based technologies among medical professionals.

Methods: A survey was developed based on the Unified Theory of Acceptance and Use of Technology model, which was extended by adding the predictor variables perceived trust, anxiety, and innovativeness, and the moderator profession. The web-based survey was completed by 67 medical professionals in the Netherlands. The data were analyzed by performing a multiple linear regression analysis, followed by a moderation analysis using the Hayes PROCESS macro in SPSS (version 26.0; IBM Corp).

Results: Multiple linear regression showed that the model explained 75.4% of the variance in the acceptance of AI-powered care pathways (adjusted R2=0.754; F9,57=22.548; P<.001). The variables medical performance expectancy (β=.465; P<.001), effort expectancy (β=–.215; P=.005), perceived trust (β=.221; P=.007), nonmedical performance expectancy (β=.172; P=.08), facilitating conditions (β=–.160; P=.005), and professional identity (β=.156; P=.06) were identified as significant predictors of acceptance. Social influence of patients (β=.042; P=.63), anxiety (β=.021; P=.84), and innovativeness (β=.078; P=.30) were not identified as significant predictors. A moderating effect of gender was found on the relationship between facilitating conditions and acceptance (β=–.406; P=.09).

Conclusions: Medical performance expectancy was the strongest predictor of AI-powered care pathway acceptance among medical professionals. Nonmedical performance expectancy, effort expectancy, perceived trust, facilitating conditions, and professional identity were also found to significantly influence the acceptance of AI-powered care pathways. These factors should be addressed for the successful implementation of AI-powered care pathways in health care delivery. The study was limited to medical professionals in the Netherlands, where the uptake of AI technologies is still at an early stage. Follow-up multinational studies should further explore the predictors of acceptance of AI-powered care pathways over time, in different geographies, and with larger samples.

JMIR Form Res 2022;6(6):e33368

doi:10.2196/33368



Introduction

Health care systems are currently burdened owing to an aging population, increasing life expectancy, the development of expensive therapies, inefficient design, and a growing demand for good quality of care [1]. This has resulted in rising health care expenditure and has threatened the accessibility of care [1]. Artificial intelligence (AI), broadly defined as the capability of a machine to imitate intelligent human behavior [2], has the potential to help address many of these challenges. Through the development of sophisticated algorithms, AI can assist in the diagnosis, monitoring, and treatment of patients; it can also help streamline services and render administrative tasks more efficient [3]. Even though AI has already proven beneficial in several health areas, such as clinical decision support, patient monitoring, health interventions, and health care administration [4-6], its impact on health care delivery has thus far remained limited [7].

Several barriers to entry have been identified that explain the underuse of AI in health care delivery, including regulatory constraints, ethical considerations, lack of transparency, and lack of facilitating conditions [8-10]. A further crucial barrier to the implementation of AI-based technologies is the lack of adoption among medical professionals [9]. Indeed, individuals’ acceptance and use of technologies are proposed to be the most important factors for health technology adoption [11]. Currently, it is poorly understood why medical professionals do or do not adopt AI technologies. Recently, some studies have researched the perspectives of end users on the implementation of AI-based technologies, but more insight is essential [12].

The lack of understanding of what informs medical professionals’ resistance to the adoption of AI-based technologies can have important negative consequences, as it can limit and delay substantial improvements in health care delivery and result in wasted research and high design costs. Therefore, this study investigated medical professionals’ perspectives on the adoption of AI-powered care pathways by identifying which factors influence the acceptability of AI-based technologies among these stakeholders, and to what degree.

This study focused on AI-powered care pathway technology, which enables the management of chronic diseases on a digital platform. All stakeholders (including medical professionals, patients, and caregivers), the medical activities involved, and the associated support programs are included in this platform. It enables medical professionals to continuously monitor their patient population’s disease activity and mental well-being through a patient app. The care pathways were designed to offer the right care at the right time and are continuously risk-adjusted using AI. This risk adjustment proceeds in several steps. The first step entails updating the patient’s data in the system. In the second step, the data are classified into the patient’s risk profile. In the third step, the risk profile–learning models and algorithms (based on AI) update the care pathway based on the most recent profile of the patient. From this update, a new recommendation is formulated (or not, if no alteration is necessary). Lastly, the medical professional can accept or reject the received recommendation based on various considerations and in dialogue with the patient, as sketched below.
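To make this cycle concrete, the following Python sketch expresses the four steps in code. It is purely illustrative: all names (Patient, classify_risk_profile, update_care_pathway) and the threshold rule are hypothetical stand-ins, not the platform’s actual models or API.

```python
# Illustrative sketch of the four-step risk-adjustment cycle described above.
# All names and the threshold rule are hypothetical stand-ins.
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class Patient:
    patient_id: str
    clinical_data: dict = field(default_factory=dict)
    risk_profile: str = "unknown"


def classify_risk_profile(patient: Patient) -> str:
    # Step 2: classify the updated data into a risk profile. A trained AI
    # model would do this in practice; a simple threshold rule stands in.
    score = patient.clinical_data.get("disease_activity", 0.0)
    return "high" if score > 0.7 else "low"


def update_care_pathway(risk_profile: str) -> Optional[str]:
    # Step 3: map the latest risk profile to a pathway recommendation.
    # Returns None when no alteration to the pathway is necessary.
    recommendations = {"high": "schedule a consultation within 2 weeks"}
    return recommendations.get(risk_profile)


def risk_adjustment_cycle(
    patient: Patient,
    new_data: dict,
    review: Callable[[Patient, str], bool],
) -> None:
    patient.clinical_data.update(new_data)                       # Step 1
    patient.risk_profile = classify_risk_profile(patient)        # Step 2
    recommendation = update_care_pathway(patient.risk_profile)   # Step 3
    if recommendation is not None:                               # Step 4
        accepted = review(patient, recommendation)
        print("accepted" if accepted else "rejected", "->", recommendation)


# Example run: the review callback stands in for the professional's decision,
# made in dialogue with the patient.
risk_adjustment_cycle(
    Patient("p-001"),
    {"disease_activity": 0.85},
    review=lambda pt, rec: True,
)
```

The key design point the sketch illustrates is that the AI only proposes pathway changes; the medical professional remains the final decision maker in the loop.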


Methods

Recruitment

The target population consisted of medical professionals who were employed at a Dutch hospital and other hospital staff who worked on improving the quality of care. Participants were mainly invited to participate through the physicians’ network (email, LinkedIn, and virtual and in-person meetings). A web-based survey was created using Qualtrics [13]. The data were gathered between April 20 and June 1, 2021. The survey took approximately 7 minutes to complete.

Model

To determine the factors that influence the acceptability of AI-powered care pathways among medical professionals, the validated Unified Theory of Acceptance and Use of Technology (UTAUT) model was extended and subsequently applied [14]. According to the UTAUT model, the acceptance of a new technology can be measured by the behavioral intention (BI) to use that technology. The following predictor variables from UTAUT were included in the analysis: performance expectancy (PE; divided into medical and nonmedical), effort expectancy (EE), social influence (SI; divided into social influence patients and social influence medical), and facilitating conditions (FC) [14]. The original construct of performance expectancy was divided into medical and nonmedical because Shaw et al [15] stated that AI-based technologies serve different relevant tasks (clinical, epidemiological, and operational) and that uncovering the value proposition across these tasks is an essential consideration for successful adoption. To uncover the value proposition for AI-powered care pathways, the constructs of medical performance expectancy (clinical in the article by Shaw et al [15]) and nonmedical performance expectancy (operational in the article by Shaw et al [15]) were created. The construct of social influence was divided because different studies highlighted that social influence is often studied for one influential group while the influence of other groups is neglected [3,4,16,17]. Eckhardt et al [16] proposed deriving the relevant influential groups and treating their different impacts with due respect. In light of this study, two main influential groups were identified, namely medical professionals and patients, resulting in the following constructs: social influence medical experts (SIME) and social influence patients (SIPA). The model was further enriched with several variables that relevant scientific literature from multiple disciplines identified as playing a role in shaping the acceptance of AI-based technologies: perceived trust (PT) [18], anxiety (AN) [19], professional identity (PI) [20], and innovativeness (IN) [21] (Table 1). Furthermore, three moderators from the UTAUT model were included, namely age, gender, and experience [14]. In the original model, experience is defined as experience with the technology under study. This definition was not applicable here, since AI-powered care pathways are at a premature stage of implementation; it was therefore changed to “years of experience in the medical field” [22]. An additional moderator, profession, was added since different professions entail different responsibilities and tasks, which is hypothesized to influence the relationships between the predictor variables and acceptance. A schematic overview of the model is shown in Figure 1, and a formal sketch of the resulting regression specification is given below.
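Using the construct abbreviations above, the hypothesized relationships can be written as a standard linear regression specification. This is our formalization, not an equation from the original UTAUT paper; note that SIME was later dropped owing to poor internal consistency (see Results).

```latex
% Hypothesized linear model for behavioral intention (BI).
\mathrm{BI} = \beta_0 + \beta_1\,\mathrm{MEPE} + \beta_2\,\mathrm{NMPE}
            + \beta_3\,\mathrm{EE} + \beta_4\,\mathrm{SIPA} + \beta_5\,\mathrm{SIME}
            + \beta_6\,\mathrm{FC} + \beta_7\,\mathrm{PT} + \beta_8\,\mathrm{AN}
            + \beta_9\,\mathrm{PI} + \beta_{10}\,\mathrm{IN} + \varepsilon
```

Each moderator (age, gender, experience, or profession) is then hypothesized to alter the strength of an individual relationship, which can be tested by adding an interaction term (predictor × moderator) to the regression.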

Table 1. Definitions of the predictor variables for the behavioral intention to use artificial intelligence (AI)–powered care pathways.

Medical performance expectancy^a: degree to which an individual believes that using AI-powered care pathways will help him or her attain gains in terms of the provided quality of care [14,15]
Nonmedical performance expectancy^a: degree to which an individual believes that using AI-powered care pathways will help him or her attain gains in productivity, efficiency, and communication [14,15]
Effort expectancy: degree of ease associated with the use of the system [14]
Social influence patients^b: degree to which an individual perceives that patients believe that he or she should use the new system [14,16,17]
Social influence medical^b: degree to which an individual perceives that other medical organizations or colleagues believe that he or she should use the new system [14,16,17]
Facilitating conditions: degree to which an individual believes that an organizational and technical infrastructure exists to support the use of the system [14]
Perceived trust: users’ specific trust that AI-powered care pathways have the ability, integrity, and benevolence in providing their service [18]
Anxiety: the fear (eg, sadness, perception, and stress caused by stress-creating situations) experienced by an individual during their interaction with AI-powered care pathways [19]
Professional identity: the attitudes, values, knowledge, beliefs, and skills that are shared with others within a professional role being undertaken by the individual [23]
Innovativeness: degree to which an individual is relatively earlier in adopting an innovation than other members of his or her (social) system [21]

^aThe original determinant of performance expectancy in the Unified Theory of Acceptance and Use of Technology (UTAUT) model was divided into two separate variables since performance expectancy for AI-powered care pathways can be viewed from a medical and a nonmedical perspective.

^bThe original determinant of social influence in the UTAUT model was divided into two separate variables since it is hypothesized that patients and medical organizations or colleagues exert different influences.

Figure 1. Overview of the conceptual model used in this study. The predictor variables (performance expectancy, effort expectancy, social influence, facilitating conditions, perceived trust, anxiety, professional identity, and innovativeness) are hypothesized to explain the variance in the acceptance of AI-powered care pathways. The moderators (age, gender, experience, and profession) are hypothesized to influence the relationships between the predictor variables and the dependent variable. AI: artificial intelligence; UTAUT: Unified Theory of Acceptance and Use of Technology.

Survey

The survey contained questions about demographics, including age, gender, experience, and profession. The participants were then invited to rate statements concerning the constructs on a 5-point Likert scale (1=totally disagree, 5=totally agree). The survey items were formulated by adopting statements from prior research and by developing new statements within the research group (see Multimedia Appendix 1 for the statements). Before the data were collected, the survey was tested extensively in a pilot study with graduate students, individuals who were not familiar with AI-powered care pathways, and DEARhealth staff. This pilot tested for confusing formulations, layout problems, the approximate time to complete the survey, and the technical resources needed. Where needed, adjustments were subsequently made.

Statistical Analysis

Measurement Model Testing

To assess the reliability of the measurements, the internal consistency was tested using Cronbach α values. The commonly used rule of thumb for Cronbach α was applied, where a value above .6 is acceptable, a value between .5 and .6 is questionable, and a value below .5 is unacceptable [24]. When reliability problems arose, remedial measures such as item removal were applied. Furthermore, Pearson correlations between the predictor variables were tested to rule out strong internal relations; a rule of thumb of >0.7 was used. A minimal sketch of these checks is given below.
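The following Python sketch shows one way to run these two checks. It is a minimal illustration under the assumption that item responses sit in a pandas DataFrame; the synthetic data and column names are stand-ins mirroring the rules of thumb above, not the study’s actual data pipeline.

```python
# Minimal sketch of the reliability and correlation checks, assuming the
# survey responses are in a pandas DataFrame with one column per item.
import numpy as np
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


# Hypothetical example: three 5-point Likert items measuring one construct.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=100)
items = pd.DataFrame({
    "item1": base,
    "item2": np.clip(base + rng.integers(-1, 2, size=100), 1, 5),
    "item3": np.clip(base + rng.integers(-1, 2, size=100), 1, 5),
})

alpha = cronbach_alpha(items)
print(f"Cronbach alpha: {alpha:.3f}")  # acceptable above .6 per the rule used

# Pearson correlations between scores; flag any pair exceeding 0.7.
corr = items.corr(method="pearson")  # stand-in for construct scores
off_diagonal = ~np.eye(len(corr), dtype=bool)
too_high = (corr.abs() > 0.7) & off_diagonal
print("Pairs exceeding 0.7:", corr.where(too_high).stack().index.tolist())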

Relationship Testing

The data were analyzed using SPSS (version 26; IBM Corp), including the PROCESS macro extension developed by Hayes [25]. The data were first cleaned to exclude responses with missing data. A descriptive analysis was then conducted to gain insight into the respondents' demographic characteristics. Next, a multiple linear regression analysis was performed to test the contribution of each predictor variable to the variance of BI. Before the multiple linear regression analysis was conducted, the assumptions were checked to rule out violations; the checks covered linearity, multivariate outliers, heteroscedasticity, and multicollinearity. A P value of <.01 was considered significant (Multimedia Appendix 2). Lastly, a moderation analysis was conducted using PROCESS model 1, which applies hierarchical multiple regression with an interaction term; a minimal sketch of both steps follows below.
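As a sketch of what these two steps amount to, the Python equivalent below fits the main regression and one moderation model on synthetic stand-in data. The data, effect sizes, and 0/1 gender coding are hypothetical illustrations of the workflow, not a reproduction of the SPSS/PROCESS analysis.

```python
# Sketch of the regression and moderation steps on synthetic stand-in data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 67  # same sample size as the study
predictors = ["MEPE", "NMPE", "EE", "SIPA", "FC", "PT", "AN", "PI", "IN"]
df = pd.DataFrame(rng.normal(3, 1, size=(n, len(predictors))),
                  columns=predictors)
df["gender"] = rng.integers(0, 2, size=n)  # 0 = female, 1 = male (stand-in)
df["BI"] = 0.5 * df["MEPE"] + 0.2 * df["PT"] + rng.normal(0, 1, size=n)

# Step 1: multiple linear regression of behavioral intention (BI) on the
# predictor variables (SIME omitted, as in the study).
main_model = smf.ols("BI ~ " + " + ".join(predictors), data=df).fit()
print(main_model.rsquared_adj, main_model.fvalue, main_model.f_pvalue)

# Step 2: moderation analysis (the PROCESS model 1 equivalent) as a
# regression with an interaction term; a significant FC:gender coefficient
# would indicate that gender moderates the FC-acceptance relationship.
moderation_model = smf.ols("BI ~ FC * gender", data=df).fit()
print(moderation_model.params["FC:gender"],
      moderation_model.pvalues["FC:gender"])
```

The formula `FC * gender` expands to the main effects plus the FC:gender interaction, which mirrors how hierarchical moderation regression enters the interaction term after the main effects.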

Ethical Considerations

No additional ethical approval was needed according to the web-based BETCHIE check of the Beta Faculty of Vrije Universiteit Amsterdam, which indicated that the target group was not considered a vulnerable group in this research. The privacy of the respondents was ensured by anonymizing the survey in Qualtrics; the researchers could not track the source of the responses, and no private information was collected. Participation in this research was voluntary.


Results

Participants

In total, 111 health care professionals started the survey. After excluding respondents with missing answers (n=41) or monotone answers (n=3), 67 remained. Of the 67 participants, 41 (61.2%) identified as female and 26 (38.8%) as male. The age distribution was as follows: <35 years (n=20, 29.9%), 35-55 years (n=27, 40.3%), and >55 years (n=20, 29.9%). Regarding the different medical professions, an overrepresentation of physicians (n=28, 41.8%) was identified compared to the number of nurses (n=14, 20.9%) and nurse specialists (n=2, 3.0%) (Table 2).

Table 2. Participant demographics (N=67). Values are presented as participants, n (%).

Gender
  Male: 26 (38.8)
  Female: 41 (61.2)

Age (years)
  18-24: 5 (7.5)
  25-34: 15 (22.4)
  35-44: 16 (23.9)
  45-54: 11 (16.4)
  55-64: 17 (25.4)
  ≥65: 3 (4.5)

Experience in the medical field (years)
  ≤2: 5 (7.5)
  3-5: 10 (14.9)
  6-10: 7 (10.4)
  11-20: 19 (28.4)
  21-30: 14 (20.9)
  ≥31: 12 (17.9)

Profession
  Physician: 28 (41.8)
  Nurse specialist: 2 (3.0)
  Nurse: 14 (20.9)
  Management: 11 (16.4)
  Consultant: 11 (16.4)
  Other function in hospital: 13 (19.4)

Outcomes

Measurement Testing Findings

A Cronbach α score was calculated for each construct of the model to validate the internal consistency of the measurement statements within that construct (Table 3). The SIME variable showed a Cronbach α below .5 and was therefore removed from the analysis. The Pearson correlation coefficients between the predictor variables did not exceed 0.7, indicating acceptable correlations between the predictors (Multimedia Appendix 3).

Table 3. Internal reliability of the constructs, based on the 3 statements per construct, using Cronbach α values. Social influence medical experts (SIME) and facilitating conditions (FC) showed unacceptable internal consistency (Cronbach α<.5). Item removal resulted in a better Cronbach α for facilitating conditions.

Innovativeness: .706
Anxiety: .701
Facilitating conditions, FC1+FC2 (after removal of item FC3): .455 → .512
Nonmedical performance expectancy: .662
Social influence patients: .667
Medical performance expectancy: .638
Social influence medical experts: .244
Professional identity: .748
Perceived trust: .717
Effort expectancy: .816
Behavioral intention: .916
Regression Outcomes

The results of the multiple linear regression analysis showed significant relationships between the predictor variables and the acceptance of AI-powered care pathways. Overall, 75.4% of the variance in acceptance can be explained by the independent variables of the model (adjusted R2=0.754; F9,57=22.548; P<.001), indicating a highly significant model. The analysis showed that the variables medical performance expectancy (MEPE; β=.465; P<.001), nonmedical performance expectancy (NMPE; β=.172; P=.08), PT (β=.221; P=.007), and PI (β=.156; P=.06) had a significant positive effect on the acceptance of AI-powered care pathways (Figure 2 and Multimedia Appendix 4). Both EE (β=–.215; P=.005) and FC (β=–.160; P=.005) were found to have a negative impact on acceptance. Judging from the magnitude of the β statistics, MEPE had the largest impact on the variance, followed by PT, EE, NMPE, FC, and PI. Some variables did not show a significant result: SIPA (β=.042; P=.63), AN (β=.021; P=.84), and IN (β=.078; P=.30).

Figure 2. Overview of the individual relationships between the predictor variables and the acceptance of artificial intelligence–powered care pathways. Medical performance expectancy (MEPE), nonmedical performance expectancy, effort expectancy, facilitating conditions, perceived trust, and professional identity showed a significant relationship with acceptance, with MEPE having the largest impact. Social influence patients, anxiety, and innovativeness did not show a significant relationship with the variance in acceptance. The predictor variable social influence medical was excluded from the analysis since it showed poor internal consistency. n.s.: not significant.
Moderating Outcomes

The moderating effects of gender, age, experience, and profession were each tested for the relationships between the individual predictor variables and the acceptance of AI-powered care pathways (Multimedia Appendix 5). Gender had a significant moderating effect on the relationship between facilitating conditions and the acceptance of AI-powered care pathways (β=–.406; P=.09), indicating that identifying as male had a positive moderating effect and identifying as female a negative one (Figure 3). No other significant moderators were identified.

Figure 3. Moderating effect of gender. Being male had a positive moderating effect, whereas being female had a negative moderating effect.

Discussion

Principal Findings

This study investigated the technology acceptance of AI-powered care pathways among medical professionals. The model explained 75.4% of the variance in acceptance among the medical professionals. The predictor variables MEPE, NMPE, EE, FC, PT, and PI were found to significantly influence the acceptance of AI-powered care pathways, whereas SIPA, SIME, IN, and AN were not found to significantly influence the behavioral intention. One moderating relationship was found: gender moderates the relationship between FC and acceptance, with identifying as male increasing the likelihood of accepting AI-powered care pathways.

Comparison With Prior Work

The predictor MEPE was found to have the highest impact on the acceptance of AI-powered care pathways. Several studies on the acceptance of health technology also identified performance expectancy as the main predictor [12,14,26-29]. MEPE also emerged as the most important goal of physicians in the qualitative study of Laï et al [12]; they concluded that providing the best care for patients was the main goal of physicians and that, if AI-based technologies could enhance that, physicians were not opposed to change and the use of AI-based technologies. These findings confirm that medical professionals are more willing to use AI-powered care pathways when they see the benefits and added value for the quality of care. Interestingly, when comparing the magnitude of the β values, MEPE was found to have more than double the influential strength of NMPE, implying that the perceived added value in terms of work efficiency, productivity, and communication has a lower impact on acceptance than the perceived added value for the quality of care. This finding is in line with that of Shaw et al [15], who stated that it is important to look at the value proposition between the different added values a technology can bring. It suggests that the medical professionals in this study focus on the clinical relevance of AI-powered care pathways and that they would be most interested in the integrated care approach these pathways facilitate. Another possible explanation for this considerable difference in magnitude could be that medical professionals are less aware of the added value that AI-powered care pathways have regarding efficiency, productivity, and communication, as most information about AI in health care tends to focus on the impact it can have on clinical aspects. This might have resulted in NMPE being less influential than MEPE in the variance of the acceptance.

A positive impact of PI was found, implying that if medical professionals perceive AI-powered care pathways as a positive stimulus to their career growth, professional status, and financial situation, they would be more willing to use the technology, and vice versa. This finding confirms the results of Jussupow et al [20], who proposed that for the successful implementation of information technology, especially AI, in health care, it is crucial to identify and address professional identity threats. Currently, none of the popular technology acceptance models (UTAUT, the Technology Acceptance Model, Diffusion of Innovation, and Technology Readiness Phases) includes a determinant that considers the influence of PI. This study highlights the importance of involving PI when studying the acceptance of target groups with a strong professional identity or social status and when dealing with AI-based technologies.

Social influence (both SIME and SIPA) was not found to be an influential factor, which is contrary to several other studies that highlighted its importance for the adoption of new technologies [14,30]. The lack of impact of social influence in this study could be explained by the premature stage of implementation that AI-powered care pathway technology is in, as key opinion leaders are still absent and insufficient successful examples are present [31]. Future research should confirm whether this is indeed the case. Furthermore, COVID-19 could have lowered the prioritization of AI-based technologies, since the pandemic put pressure on medical professionals and no additional time was available to focus on AI-based technologies. This may have shaped their priorities and interests when they filled in the survey in ways that possibly rendered the effect of social influence insignificant. One could also argue that social influence not only shapes and helps direct the activities and approaches of medical professionals but is also itself influenced by socioeconomic circumstances or changes along with them. During the pandemic, social influence may therefore have focused on aspects or technologies more readily directed at managing the pandemic.

This study indicated that a higher perception of the availability of FC had a negative impact on the acceptance of AI-powered care pathways, which is contrary to previous studies [32]. This implies that if medical professionals perceive the FC in their medical organization (including training and technological resources) as better, they would be less likely to accept AI-powered care pathways. This could be explained by medical professionals perceiving good FC as an increase in workload due to additional training and technical tasks. Interestingly, gender was found to moderate this relationship, with identifying as female having a negative moderating effect and identifying as male a positive one. This finding is in line with that reported by Haluza and Wernhart [33], who stated that there are gender divergences that are important to incorporate when formulating a new strategy for eHealth and telemedicine implementation.

However, a closer look at the characteristics of the specific respondent groups reveals a nonrandom sample in terms of the different medical professions (Multimedia Appendix 6): all nurses included in this study identified as female, whereas physicians largely identified as male. This nonrandom sample could have influenced the moderating effect of gender, given the way in which tasks and responsibilities are distributed among these professional categories. Since nurses continue to perform more administrative work than physicians, they may view good FC as a constraint in that it may add to their already broad repertoire of tasks, whereas physicians may view FC as a helping tool, particularly given their focus on MEPE [34]. In addition, the speed of technological advances in the work field requires the continuous development of new skills, which might be more challenging to cope with for nurses owing to the highly varied nature of their tasks. Therefore, they may perceive better FC more as a demand to keep up with the fast pace of digitalization [35,36]. Besides the difference in job-specific tasks, differences in professional identity and in the distribution of professional rewards based on the acquisition of new skills could also have an influence. Traditionally, a greater focus has been placed on the need for physicians to keep up to date with the latest clinical insights and approaches, and differences in professional status and social standing have often been derived from their frequent participation in such activities. In contrast, even though important transformations have taken place over the last decades in this sense, the main task of nurses is still seen by many as the provision of care, often understood as a quality that nurses somehow naturally possess rather than a set of skills that could be trained and fostered [37]. Thus, to the extent that such training may not lead to obvious professional rewards, nurses may see it more as a constraint and an imposition than as an opportunity. Furthermore, this result may also have been influenced by the broader and often gendered realities of nurses’ lives, where family duties and other caring obligations outside their professional roles may prevent them from wanting or being able to take on new work roles and responsibilities. Moreover, regional differences in the professional identity of nurses and physicians have been found [38]. The limited sample in this study did not allow us to unambiguously determine whether the effect of FC is indeed driven by profession or mainly by gender. However, it is strongly suggested that both profession and gender play a role in how perceived FC influences acceptance, so future studies should explore the relationship between these two variables and the underlying reasoning.

Strengths and Limitations

To our knowledge, this is the first study assessing the predictors of the acceptance of AI-based technologies among medical professionals, thereby contributing to a poorly understood but increasingly relevant research area. A strength of this research is that it succeeded in identifying significant relationships that influence acceptance. Another strength is the successful utilization and extension of the UTAUT model, which creates a foundation for future research on the acceptance of AI-based technologies. Furthermore, the quantitative nature of this study allows for more generalizable results and facilitates comparisons with future studies.

Some limitations were present in this study. Selection bias was unavoidable since the respondents participated voluntarily on the internet, which might have attracted more individuals with enthusiasm for and interest in AI in health care. This selection bias could have been increased by the recruitment via the physicians’ network, which might have resulted in more positive results, since this network contains many medical professionals with an interest in health technology.

Approximately 40% of the responses contained missing values. Most health care professionals stopped the survey at the information page about AI-powered care pathways, which required some reading and thus some effort to learn about the technology. Even though efforts were made to keep the information provided about the AI-powered care pathways succinct, the health care professionals may not have had the time to read these materials owing to the increased work pressure they experienced during the COVID-19 pandemic. Future iterations of this study should also ask respondents about the modalities through which they would be most successfully informed about these technologies when they are implemented; visual or video materials might be more helpful when engaging with very busy professionals.

Furthermore, AI-powered care pathways are at the beginning of the implementation phase, and this study therefore did not include the actual use behavior of AI-powered care pathways. Consequently, this study could not show whether acceptance is valid for predicting actual use behavior.

Last, the measures used should be tested regarding their psychometric properties. Even though the constructs used in this study were mainly based on validated models, their usefulness in the context of AI-powered care pathways needs to be further investigated. In addition, new constructs were added and some constructs were adjusted, which requires further investigation. The internal consistency of the constructs was tested with the Cronbach α, and two constructs (FC and SIME) showed low internal consistency. FC was still included owing to the exploratory nature of this study; we accepted a lower Cronbach α value since the construct adds critical information to the research. The internal consistency of SIME did not allow for its inclusion in the regression. This is a limitation, since the study thereby misses a potential predictor that could still have contributed a nonzero amount to the explained variance in the case of correlated regressors, for instance by influencing other significant regressors.

Future Implications

This research should function as a foundation for future longitudinal research. Future research could identify whether acceptance differs across adoption steps and when more awareness about the technology is present, as this study was conducted at quite a premature stage, when actual use was still limited.

Furthermore, future research should identify whether the model used is applicable in different health care systems or in other regions of the world. Since this research was conducted in the Netherlands and included all types of medical organizations, variations between organizational cultures, differences in professional identity, and differences in public opinion about AI were not taken into account. Insight into these differences could help develop adequate implementation strategies per region and organization.

Adaptations were made to make the model fit the research aim better. Future studies should focus on further validating the model in the context of AI-based technologies, especially the constructs with poor internal consistency.

Since performance expectancy was found to be the strongest predictor of the acceptance of AI-powered care pathways, it should be a high priority during the implementation of AI-based health technologies, and the added value of these technologies should be clearly communicated to the end users. PT was the second most influential variable for the acceptance of AI-powered care pathways; strategies on how to increase trust in AI-based technologies should therefore be formulated for successful adoption in health care. Even though trust was found to be an important facilitator of acceptance, future research should focus not only on how to increase trust but also on what effect this trust has on actual use, since studies have found that people tend to overtrust and misinterpret the outcomes of AI-based decision support [39-41].

The quantitative nature of this study did not allow us to understand the medical professionals’ reasoning underlying the outcomes found. Future qualitative studies are therefore recommended to understand how specific personality traits, the level of understanding of AI-powered care pathways, or other contextual factors influence the acceptance of AI-based technologies.

Conclusions

This study sheds light on the factors that have the largest impact on the acceptance of AI-powered care pathways among hospital staff and medical professionals. The model explained 75.4% of the variance in the behavioral intention. MEPE, NMPE, EE, FC, PT, and PI were found to significantly influence the behavioral intention, with medical performance expectancy having the largest impact. The moderator gender was found to significantly influence the relationship between facilitating conditions and acceptance. Since this study was conducted among Dutch medical professionals over a limited period of time and at a stage where the implementation of these technologies is still limited, follow-up surveys and multinational studies could further explore the predictors of acceptance of AI-powered care pathways over time and in different contexts.

Acknowledgments

We would like to thank Mirjam van der Steen and Katinka de Korte for their support during the study. We would also like to thank all the respondents who participated in our research.

Conflicts of Interest

VvB, LW, and DH are employed by DEARhealth, The Netherlands.

Multimedia Appendix 1

Survey items with the corresponding item sources.

DOC File , 87 KB

Multimedia Appendix 2

Graphs and tables for assumption testing multiple linear regression.

DOCX File , 171 KB

Multimedia Appendix 3

Pearson correlations between the variables.

DOC File , 55 KB

Multimedia Appendix 4

Results from the multiple linear regression indicating the relationship between the predictor variables and the behavioral intention to use AI-powered care pathways.

DOC File , 56 KB

Multimedia Appendix 5

Hayes’ PROCESS regression matrix for the moderating effects on the relationships between the predictor variables and the behavioral intention to use AI-powered care pathways. The coefficient, standard error, and P value of the interaction terms are shown.

DOC File , 82 KB

Multimedia Appendix 6

Cross table for gender × profession.

DOC File , 48 KB

  1. Johansen F, Loorbach D, Stoopendaal A. Exploring a transition in Dutch healthcare. J Health Organ Manag 2018 Oct 08;32(7):875-890 [FREE Full text] [CrossRef] [Medline]
  2. Mintz Y, Brodie R. Introduction to artificial intelligence in medicine. Minim Invasive Ther Allied Technol 2019 Apr;28(2):73-81. [CrossRef] [Medline]
  3. Tekkeşin. Artificial Intelligence in Healthcare: Past, Present and Future. Anatol J Cardiol 2019 Oct;22(Suppl 2):8-9 [FREE Full text] [CrossRef] [Medline]
  4. Panesar A. Machine Learning and AI Ethics. In: Machine Learning and AI for Healthcare: Big Data for Improved Health Outcomes. Berkeley, CA: Apress; 2020:207-247.
  5. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J 2019 Jun;6(2):94-98 [FREE Full text] [CrossRef] [Medline]
  6. Reddy S, Fox J, Purohit M. Artificial intelligence-enabled healthcare delivery. J R Soc Med 2019 Jan;112(1):22-28 [FREE Full text] [CrossRef] [Medline]
  7. Higgins D, Madai V. From Bit to Bedside: A Practical Framework for Artificial Intelligence Product Development in Healthcare. Adv Intell Syst 2020 Jul 02;2(10):2000052 [FREE Full text] [CrossRef]
  8. Ahmad O, Stoyanov D, Lovat L. Barriers and pitfalls for artificial intelligence in gastroenterology: Ethical and regulatory issues. TIGE 2020 Apr;22(2):80-84 [FREE Full text] [CrossRef]
  9. Singh R, Hom G, Abramoff M, Campbell J, Chiang M, AAO Task Force on Artificial Intelligence. Current Challenges and Barriers to Real-World Artificial Intelligence Adoption for the Healthcare System, Provider, and the Patient. Transl Vis Sci Technol 2020 Aug;9(2):45 [FREE Full text] [CrossRef] [Medline]
  10. Wang F, Kaushal R, Khullar D. Should Health Care Demand Interpretable Artificial Intelligence or Accept "Black Box" Medicine? Ann Intern Med 2020 Jan 07;172(1):59-60. [CrossRef] [Medline]
  11. Selder A. Physician reimbursement and technology adoption. J Health Econ 2005 Sep;24(5):907-930. [CrossRef] [Medline]
  12. Laï MC, Brian M, Mamzer M. Perceptions of artificial intelligence in healthcare: findings from a qualitative survey study among actors in France. J Transl Med 2020 Jan 09;18(1):14 [FREE Full text] [CrossRef] [Medline]
  13. White K. Web-Based Questionnaire. In: Encyclopedia of Quality of Life and Well-Being Research. Dordrecht: Springer; 2014:301-311.
  14. Venkatesh V, Morris M, Davis G, Davis F. User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly 2003;27(3):425-478 [FREE Full text] [CrossRef]
  15. Shaw J, Rudzicz F, Jamieson T, Goldfarb A. Artificial Intelligence and the Implementation Challenge. J Med Internet Res 2019 Jul 10;21(7):e13659 [FREE Full text] [CrossRef] [Medline]
  16. Eckhardt A, Laumer S, Weitzel T. Who Influences Whom? Analyzing Workplace Referents’ Social Influence on it Adoption and Non-Adoption. J Info Technol 2009 Mar 01;24(1):11-24. [CrossRef]
  17. Samhan B. Revisiting Technology Resistance: Current Insights and Future Directions. AJIS 2018 Jan 22;22. [CrossRef]
  18. Pavlou P. Consumer Acceptance of Electronic Commerce: Integrating Trust and Risk with the Technology Acceptance Model. Int J Electron Commer 2014 Dec 23;7(3):101-134 [FREE Full text] [CrossRef]
  19. Simonson M, Maurer M, Montag-Torardi M, Whitaker M. Development of a Standardized Test of Computer Literacy and a Computer Anxiety Index. Journal of Educational Computing Research 1995 Jan 01;3(2):231-247 [FREE Full text] [CrossRef]
  20. Jussupow E, Spohrer K, Heinzl A, Link C. I am; We are - Conceptualizing Professional Identity Threats from Information Technology. 2018 Presented at: International Conference on Information Systems; December 13-16, 2018; San Francisco, CA.
  21. Agarwal R, Prasad J. A Conceptual and Operational Definition of Personal Innovativeness in the Domain of Information Technology. Inf Syst Res 1998 Jun;9(2):204-215 [FREE Full text] [CrossRef]
  22. Chen IJ, Yang KF, Tang FI, Huang CH, Yu S. Applying the technology acceptance model to explore public health nurses' intentions towards web-based learning: a cross-sectional questionnaire survey. Int J Nurs Stud 2008 Jun;45(6):869-878. [CrossRef] [Medline]
  23. Adams K, Hean S, Sturgis P, Clark J. Investigating the factors influencing professional identity of first-year health and social care students. Learn Health Soc Care 2006 Jun;5(2):55-68 [FREE Full text] [CrossRef]
  24. Gliem J, Gliem R. Calculating, Interpreting, and Reporting Cronbach’s Alpha Reliability Coefficient for Likert-Type Scales. 2003 Presented at: 2003 Midwest Research to Practice Conference in Adult, Continuing, and Community Education; 2003; Columbus, OH   URL: http://hdl.handle.net/1805/344
  25. Hayes AF. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. New York, NY: Guilford Publications; 2013.
  26. Jun S, Plint A, Campbell S, Curtis S, Sabir K, Newton A. Point-of-care Cognitive Support Technology in Emergency Departments: A Scoping Review of Technology Acceptance by Clinicians. Acad Emerg Med 2018 May;25(5):494-507 [FREE Full text] [CrossRef] [Medline]
  27. Tavares J, Oliveira T. Electronic Health Record Patient Portal Adoption by Health Care Consumers: An Acceptance Model and Survey. J Med Internet Res 2016 Mar 02;18(3):e49 [FREE Full text] [CrossRef] [Medline]
  28. van der Vaart R, Atema V, Evers A. Guided online self-management interventions in primary care: a survey on use, facilitators, and barriers. BMC Fam Pract 2016 Mar 09;17:27 [FREE Full text] [CrossRef] [Medline]
  29. Wu Y, Tao Y, Yang P. The use of unified theory of acceptance and use of technology to confer the behavioral model of 3G mobile telecommunication users. J Stat Manag Sys 2008 Sep;11(5):919-949 [FREE Full text] [CrossRef]
  30. Alalwan A, Dwivedi Y, Rana N. Factors influencing adoption of mobile banking by Jordanian bank customers: Extending UTAUT2 with trust. Int J Info Manag 2017 Jun;37(3):99-110 [FREE Full text] [CrossRef]
  31. Hao H, Padman R. An empirical study of opinion leader effects on mobile technology implementation by physicians in an American community health system. Health Informatics J 2018 Sep;24(3):323-333 [FREE Full text] [CrossRef] [Medline]
  32. Bhattacherjee A, Hikmet N. Physicians' resistance toward healthcare information technology: a theoretical model and empirical test. Eur J Inf Syst 2017 Dec 19;16(6):725-737 [FREE Full text] [CrossRef]
  33. Haluza D, Wernhart A. Does gender matter? Exploring perceptions regarding health technologies among employees and students at a medical university. Int J Med Inform 2019 Oct;130:103948. [CrossRef] [Medline]
  34. Chang I, Hwang H, Hung W, Li Y. Physicians’ acceptance of pharmacokinetics-based clinical decision support systems. Expert Syst Appl 2007 Aug;33(2):296-303 [FREE Full text] [CrossRef]
  35. De Leeuw JA, Woltjer H, Kool RB. Identification of Factors Influencing the Adoption of Health Information Technology by Nurses Who Are Digitally Lagging: In-Depth Interview Study. J Med Internet Res 2020 Aug 14;22(8):e15630 [FREE Full text] [CrossRef] [Medline]
  36. McGrath M. The challenges of caring in a technological environment: critical care nurses' experiences. J Clin Nurs 2008 Apr;17(8):1096-1104. [CrossRef] [Medline]
  37. ten Hoeve Y, Jansen G, Roodbol P. The nursing profession: public image, self-concept and professional identity. A discussion paper. J Adv Nurs 2014 Feb;70(2):295-309. [CrossRef] [Medline]
  38. Kalisch B, Begeny S, Neumann S. The image of the nurse on the internet. Nurs Outlook 2007;55(4):182-188. [CrossRef] [Medline]
  39. Eiband M, Buschek D, Kremer A, Hussmann H. The Impact of Placebic Explanations on Trust in Intelligent Systems. 2019 Presented at: CHI Conference on Human Factors in Computing Systems; May 4-9, 2019; Glasgow   URL: https://doi.org/10.1145/3290607.3312787 [CrossRef]
  40. Ehsan U, Passi S, Liao Q, Chan L, Lee I, Muller M, et al. The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations. arXiv. Preprint posted online July 28, 2021 [FREE Full text] [CrossRef]
  41. Wu E, Wu K, Daneshjou R, Ouyang D, Ho DE, Zou J. How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals. Nat Med 2021 Apr;27(4):582-584. [CrossRef] [Medline]


AI: artificial intelligence
AN: anxiety
BI: behavioral intention
EE: effort expectancy
FC: facilitating condition
IN: innovativeness
MEPE: medical performance expectancy
NMPE: nonmedical performance expectancy
PI: professional identity
PT: perceived trust
SI: social influence
SIME: social influence medical experts
SIPA: social influence patients
UTAUT: Unified Theory of Acceptance and Use of Technology


Edited by A Mavragani; submitted 04.09.21; peer-reviewed by J Tavares, JA Benítez-Andrades, D Valero-Bover; comments to author 21.12.21; revised version received 14.04.22; accepted 02.05.22; published 21.06.22

Copyright

©Lisa Cornelissen, Claudia Egher, Vincent van Beek, Latoya Williamson, Daniel Hommes. Originally published in JMIR Formative Research (https://formative.jmir.org), 21.06.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.