This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.
Artificial intelligence (AI) has proven beneficial in several health care areas. Nevertheless, the uptake of AI in health care delivery remains poor. Although the lack of acceptance of AI-based technologies among medical professionals is a key barrier to their implementation, knowledge about what informs such attitudes is scarce.
The aim of this study was to identify and examine factors that influence the acceptability of AI-based technologies among medical professionals.
A survey was developed based on the Unified Theory of Acceptance and Use of Technology model, which was extended by adding the predictor variables perceived trust, anxiety, and innovativeness, and the moderator profession. The web-based survey was completed by 67 medical professionals in the Netherlands. The data were analyzed by performing a multiple linear regression analysis followed by a moderation analysis using the Hayes PROCESS macro (SPSS version 26.0; IBM Corp).
Multiple linear regression showed that the model explained 75.4% of the variance in the acceptance of AI-powered care pathways (adjusted R²=0.754).
Medical performance expectancy was the most significant predictor of AI-powered care pathway acceptance among medical professionals. Nonmedical performance expectancy, effort expectancy, perceived trust, and professional identity were also found to significantly influence the acceptance of AI-powered care pathways. These factors should be addressed for successful implementation of AI-powered care pathways in health care delivery. The study was limited to medical professionals in the Netherlands, where uptake of AI technologies is still in an early stage. Follow-up multinational studies should further explore the predictors of acceptance of AI-powered care pathways over time, in different geographies, and with bigger samples.
Health care systems are currently burdened owing to an aging population, increasing life expectancy, the development of expensive therapies, an inefficient design, and a growing demand for a good quality of care [
Several barriers for entry have been identified, which explain the underuse of AI in health care delivery, including regulatory constraints, ethical considerations, lack of transparency, and the lack of facilitating conditions [
The lack of understanding as to what informs the resistance among medical professionals in regard to the adoption of AI-based technologies can have important negative consequences, as it can limit and delay substantial improvements in health care delivery and result in wasted research and high design costs. Therefore, this study investigated medical professionals’ perspectives on the adoption of AI-powered care pathways by identifying which factors influence, and to what degree, the acceptability of AI-based technologies among these stakeholders.
This study focused on AI-powered care pathway technology. This technology enables the management of chronic diseases on a digital platform. All stakeholders (including medical professionals, patients, and caregivers), involved medical activities, and associated support programs are included in this platform. It enables medical professionals to constantly monitor their patient population’s disease activity and mental well-being through a patient app. The care pathways were designed to offer the right care at the right time and are continuously risk-adjusted using AI. This risk adjustment is created through several steps. The first step entails updating the patient’s data into the system. In the second step, the data are classified in the patient’s risk profile. In the third step, the risk profile–learning models and algorithms (based on AI) update the care pathway upon the most recent profile of the patient. From the update, a new recommendation is formulated (or not, if no alteration is necessary). Lastly, the medical professional can accept or reject the received recommendation based on various considerations and in dialogue with the patient.
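The five-step risk-adjustment cycle described above can be sketched in code. The following is a minimal illustrative sketch, not the platform's actual implementation; all names, fields, and the simple threshold rule standing in for the AI model are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Patient:
    """Minimal patient record; the fields are illustrative, not the platform's schema."""
    id: str
    measurements: dict = field(default_factory=dict)
    risk_profile: str = "unknown"

def classify_risk(patient: Patient) -> str:
    """Step 2: classify the latest data into a risk profile.
    A real system would use a trained AI model; this threshold rule is a stand-in."""
    score = patient.measurements.get("disease_activity", 0)
    return "high" if score > 7 else "medium" if score > 3 else "low"

def update_care_pathway(patient: Patient) -> Optional[str]:
    """Steps 3-4: rerun the pathway logic on the most recent profile and formulate a
    recommendation only when the profile changed (None if no alteration is needed)."""
    new_profile = classify_risk(patient)
    if new_profile == patient.risk_profile:
        return None
    patient.risk_profile = new_profile
    return f"switch {patient.id} to the {new_profile}-risk pathway"

def clinician_review(recommendation: Optional[str], accept: bool) -> str:
    """Step 5: the medical professional accepts or rejects the recommendation."""
    if recommendation is None:
        return "pathway unchanged"
    return f"accepted: {recommendation}" if accept else "rejected by clinician"

# Step 1: new patient data arrives from the patient app.
p = Patient(id="p001")
p.measurements["disease_activity"] = 8
print(clinician_review(update_care_pathway(p), accept=True))
# prints: accepted: switch p001 to the high-risk pathway
```

The key design point the sketch mirrors is that the AI never acts autonomously: every recommendation passes through the clinician's accept/reject decision before the pathway changes in practice.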
The target population consisted of medical professionals who were employed at a Dutch hospital or other hospital staff who worked on improvement of the quality of care. Participants were mainly invited to participate through the physicians’ network (email, LinkedIn, virtual, and in-person meetings). A web-based survey was created using Qualtrics [
To determine factors that influence the acceptability of AI-powered care pathways among medical professionals, the validated Unified Theory of Acceptance and Use of Technology (UTAUT) model was extended and subsequently applied [
Definitions of the predictor variables for the behavioral intention to use artificial intelligence (AI)-powered care pathways.
Construct | Operational definition |
Medical performance expectancya | Degree to which an individual believes that using AI-powered care pathways will help him or her to attain gains in terms of the provided quality of care [ |
Nonmedical performance expectancya | Degree to which an individual believes that using AI-powered care pathways will help him or her to attain gains in productivity, efficiency, and communication [ |
Effort expectancy | Degree of ease associated with the use of the system [ |
Social influence patientsb | Degree to which an individual perceives that patients believe that he or she should use the new system [ |
Social influence medicalb | Degree to which an individual perceives that other medical organizations or colleagues believe that he or she should use the new system [ |
Facilitating conditions | Degree to which an individual believes that an organizational and technical infrastructure exists to support the use of the system [ |
Perceived trust | Users’ specific trust that AI-powered care pathways have the ability, integrity, and benevolence in providing their service [ |
Anxiety | The fear (eg, sadness, perception, and stress caused by stress-creating situations) experienced by an individual during their interaction with AI-powered care pathways [ |
Professional identity | The attitudes, values, knowledge, beliefs, and skills that are shared with others within a professional role being undertaken by the individual [ |
Innovativeness | Degree to which an individual is relatively earlier in adopting an innovation than other members of his (social) system [ |
aThe original determinant of performance expectancy in the Unified Theory of Acceptance and Use of Technology (UTAUT) model was divided into two separate variables, since performance expectancy for AI-powered care pathways can be viewed from a medical and a nonmedical perspective.
bThe original UTAUT determinant of social influence was divided into two separate variables, since it is hypothesized that patients and medical organizations or colleagues exert different influences.
Overview of the conceptual model used in this study. The predictor variables (performance expectancy, effort expectancy, social influence, facilitating conditions, perceived trust, anxiety, professional identity, and innovativeness) are hypothesized to influence the acceptance of AI-powered care pathways. The moderators (age, gender, experience, and profession) are hypothesized to influence the relationships between the predictor variables and the dependent variable. AI: artificial intelligence; UTAUT: Unified Theory of Acceptance and Use of Technology.
The survey contained questions about demographics, including age, gender, experience, and profession. Participants were then invited to rate statements concerning the constructs on a 5-point Likert scale (1=totally disagree, 5=totally agree). The survey items were formulated by adopting statements from prior research and by developing new statements within the research group (
To assess the reliability of the measurements, the internal consistency was tested using Cronbach α.
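Cronbach α for a construct measured by several Likert items can be computed directly from the item scores. A minimal self-contained sketch; the three-item responses below are invented for illustration, not the study's data:

```python
def cronbach_alpha(items):
    """Cronbach alpha for a list of item-score columns (one list per item):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of the total score)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Three 5-point Likert items answered by five hypothetical respondents.
item1 = [4, 5, 3, 4, 2]
item2 = [4, 4, 3, 5, 2]
item3 = [5, 4, 2, 4, 3]
print(round(cronbach_alpha([item1, item2, item3]), 3))  # prints 0.864
```

An α of roughly .7 or higher is conventionally read as acceptable internal consistency for a construct.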
The data were analyzed using SPSS (version 26; IBM Corp), including the PROCESS macro extension developed by Hayes [
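A moderation analysis of the kind PROCESS performs (its simple-moderation model) amounts to an ordinary least squares regression that includes a predictor × moderator interaction term. A self-contained sketch with invented toy data; the coefficients 1, 0.5, 0.3, and 0.4 are arbitrary illustration values, not the study's estimates:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination and partial pivoting."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(p)] for r in range(p)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]
    for col in range(p):                      # forward elimination
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):            # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

# Toy data: acceptance = 1 + 0.5*FC + 0.3*male + 0.4*FC*male (exact, no noise),
# so the fitted interaction coefficient recovers the moderating effect of gender.
fc = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
male = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
y = [1 + 0.5 * f + 0.3 * m + 0.4 * f * m for f, m in zip(fc, male)]
X = [[1.0, f, m, f * m] for f, m in zip(fc, male)]
print([round(c, 3) for c in ols(X, y)])  # intercept, FC, gender, FC x gender
```

A nonzero (significant) interaction coefficient is what identifies a moderating effect; PROCESS additionally reports its standard error and P value.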
No additional ethical approval was needed according to the online check performed using the web-based BETCHIE test of the Beta faculty of Vrije Universiteit Amsterdam, which indicated that the target group was not considered a vulnerable group in this research. The privacy of the respondents was ensured by anonymizing the survey in Qualtrics. The researchers could not track the source of the survey, and no private information was collected. Participating in this research was voluntary.
In total, 111 health care professionals started the survey. After excluding respondents with missing answers (n=41) or monotone answers (n=3), 67 remained. Of the 67 participants, 41 (61.2%) identified as female and 26 (38.8%) as male. The age distribution was as follows: <35 years (n=20, 29.9%), 35-55 years (n=27, 40.3%), and >55 years (n=20, 29.9%). Among the different medical professions, an overrepresentation of physicians (n=28, 41.8%) was identified compared to nurses (n=14, 20.9%) and nurse specialists (n=2, 3.0%) (
Participant demographics (N=67).
Characteristics | Participants, n | Participants, %
Gender
Male | 26 | 38.8
Female | 41 | 61.2
Age (years)
18-24 | 5 | 7.5
25-34 | 15 | 22.4
35-44 | 16 | 23.9
45-54 | 11 | 16.4
55-64 | 17 | 25.4
≥65 | 3 | 4.5
Work experience (years)
≤2 | 5 | 7.5
3-5 | 10 | 14.9
6-10 | 7 | 10.4
11-20 | 19 | 28.4
21-30 | 14 | 20.9
≥31 | 12 | 17.9
Profession
Physician | 28 | 41.8
Nurse specialist | 2 | 3.0
Nurse | 14 | 20.9
Management | 11 | 16.4
Consultant | 11 | 16.4
Other function in hospital | 13 | 19.4
A Cronbach α
Internal reliability of the constructs, each measured using 3 statements, based on Cronbach α.
Variable | Cronbach α
Innovativeness | .706 |
Anxiety | .701 |
Facilitating conditions (FC; after removal of item FC3) | .455 → .512
Nonmedical performance expectancy | .662 |
Social influence patients | .667 |
Medical performance expectancy | .638 |
Social influence medical experts | .244 |
Professional identity | .748 |
Perceived trust | .717 |
Effort expectancy | .816
Behavioral intention | .916 |
The results of multiple linear regression analysis showed significant relationships between the predictor variables and the acceptance of AI-powered care pathways. Overall, the results show that 75.4% of the variance in the acceptance can be explained by the independent variables of the model (adjusted R²=0.754).
Overview of the individual relationships between the predictor variables and the acceptance of artificial intelligence–powered care pathways. Medical performance expectancy (MEPE), nonmedical performance expectancy, effort expectancy, facilitating conditions, perceived trust, and professional identity significantly influenced acceptance, with MEPE having the largest impact. Social influence patients, anxiety, and innovativeness did not show a significant relationship with acceptance. The predictor variable social influence medical was excluded from the analysis because it showed poor internal consistency. n.s.: not significant.
The moderating effects of gender, age, experience, and profession were each tested for the relationships between the individual predictor variables and the acceptance of AI-powered care pathways (
Moderating effect of gender. Being male had a positive moderating effect, whereas being female had a negative one.
This study investigated the technology acceptance of AI-powered care pathways among medical professionals. The model explained 75.4% of the variance in acceptance of the medical professionals. The predictor variables MEPE, NMPE, EE, FC, PT, and PI were found to significantly influence the acceptance of AI-powered care pathways. SIPA, SIME, IN, and AN were not found to significantly influence the behavioral intention. One moderating relationship was found; gender moderates the relationship between FC and acceptance, with identifying as male increasing the likelihood of accepting AI-powered care pathways.
The predictor MEPE was found to have the highest impact on the acceptance of AI-powered care pathways. Several studies on the acceptance of health technology also identified performance expectancy as the main predictor [
A positive impact of PI was found, implying that if medical professionals perceive AI-powered care pathways as a positive stimulus to their career growth, professional status, and financial situation, they would be more willing to use the technology, and vice versa. This finding confirms the results of Jussupow et al [
Social influence was not found to be an influential factor (both SIME and SIPA), which is contrary to several other studies that highlighted its importance for the adoption of new technologies [
This study indicated that a higher perception of the availability of FC had a negative impact on the acceptance of AI-powered care pathways, which is contrary to previous studies [
However, a closer look at the characteristics of the specific respondent groups reveals a nonrandom sample in terms of the different medical professions (
To our knowledge, this is the first study assessing the predictors of acceptance of AI-based technologies among medical professionals, thereby contributing to a poorly understood but increasingly relevant research area. A strength of this research was that it succeeded in identifying significant relationships that influence acceptance. Another strength was the successful use of the UTAUT model and its extensions. This creates a foundation for future research on the acceptance of AI-based technologies. Furthermore, the quantitative nature of this study allows for more generalizable results and facilitates comparisons with future studies.
Some limitations were present in this study. Selection bias was unavoidable, since respondents participated voluntarily on the internet, which might have attracted more individuals with enthusiasm for and interest in AI in health care. This selection bias could have been increased by the recruitment via the physicians' network, which might have yielded more positive results, since the network includes many medical professionals with an interest in health technology.
The results revealed approximately 40% of responses with missing values. Most health care professionals stopped the survey at the information page about AI-powered care pathways. This page required some reading and thus some effort to learn about AI-powered care pathways. Even though efforts were made to keep the information about AI-powered care pathways succinct, the health care professionals may not have had the time to read these materials owing to the increased work pressure they experienced during the COVID-19 pandemic. Future iterations of this study should also ask respondents about the modalities through which they would be most effectively informed about these technologies when they are implemented. Visual or video materials might be more helpful when engaging very busy professionals.
Furthermore, AI-powered care pathways are at the beginning of the implementation phase; this study therefore could not include the actual use behavior of AI-powered care pathways and could not show whether acceptance is valid for predicting actual use behavior.
Lastly, the measures used should be tested regarding their psychometric properties. Even though the constructs used in this study were mainly based on validated models, their usefulness in the context of AI-powered care pathways needs to be further investigated. In addition, new constructs were added and some constructs were adjusted, which requires further investigation. The internal consistency of the constructs was tested with Cronbach α.
This research should function as a foundation for future longitudinal research. Future research could identify whether acceptance differs across adoption steps and when more awareness about the technology is present. This study was conducted at quite an early stage, when actual use was still limited.
Furthermore, future research should identify whether the model used is applicable in different health care systems or in other regions of the world. Since this research was conducted in the Netherlands and included all types of medical organizations, variations between organizational cultures, differences in professional identity, and differences in public opinion about AI were not taken into account. Insight into these differences could help develop adequate implementation strategies per region and organization.
Adaptations were made to make the model fit the research aim better. Future studies should focus on further validating the model in the context of AI-based technologies, especially the constructs with poor internal consistency.
Since performance expectancy was found to be the strongest predictor of the acceptance of AI-powered care pathways, it should be a high priority during the implementation of AI-based health technologies. The added value of these technologies should be clearly communicated to the end users. PT was the second most influential variable for the acceptance of AI-powered care pathways. Strategies for increasing trust in AI-based technologies should therefore be formulated for successful adoption in health care. Even though trust was found to be an important facilitator of acceptance, future research should focus not only on how to increase trust but also on what effect this trust has on actual use, since studies found that people tend to overtrust and misinterpret the outcomes of AI-based decision support [
The quantitative nature of this study did not allow us to understand the medical professionals' reasoning underlying these outcomes. Future qualitative studies are therefore recommended to understand how specific personality traits, the degree of understanding of AI-powered care pathways, or other contextual factors influence the acceptance of AI-based technologies.
This study sheds light on which factors have the largest impact on the acceptance of AI-powered care pathways among hospital staff and medical professionals. The model explained 75.4% of the variance in behavioral intention. MEPE, NMPE, EE, FC, PT, and PI were found to significantly influence behavioral intention, with MEPE having the largest impact. The moderator gender was found to significantly influence the relationship between facilitating conditions and acceptance. Since this study was conducted among Dutch medical professionals over a limited period of time and at a stage where the implementation of these technologies is still limited, follow-up surveys and multinational studies could further explore the predictors of acceptance of AI-powered care pathways over time and in different contexts.
Survey items with the corresponding item sources.
Graphs and tables for assumption testing multiple linear regression.
Pearson correlations between the variables.
Results from the multiple linear regression indicating the relationship between the predictor variables and the behavioral intention to use AI-powered care pathways.
Hayes’ PROCESS regression matrix for the moderating effects on the relationships between the predictor variables and the behavioral intention to use AI-powered care pathways. The coefficient, standard error, and P value of the interaction terms are shown.
Cross table of gender × profession.
artificial intelligence
anxiety
behavioral intention
effort expectancy
facilitating condition
innovativeness
medical performance expectancy
nonmedical performance expectancy
professional identity
perceived trust
social influence
social influence medical experts
social influence patients
Unified Theory of Acceptance and Use of Technology
We would like to thank Mirjam van der Steen and Katinka de Korte for their support during the study. We would also like to thank all the respondents who participated in our research.
VvB, LW, and DH are employed by DEARhealth, The Netherlands.