Original Paper
Abstract
Background: Artificial intelligence (AI) has the potential to address growing logistical and economic pressures on the health care system by reducing risk, increasing productivity, and improving patient safety; however, implementing digital health technologies can be disruptive. Workforce perception is a powerful indicator of technology use and acceptance; however, there is little research available on the perceptions of allied health professionals (AHPs) toward AI in health care.
Objective: This study aimed to explore AHP perceptions of AI and the opportunities and challenges for its use in health care delivery.
Methods: A cross-sectional survey was conducted at a health service in Queensland, Australia, using the Shinners Artificial Intelligence Perception tool.
Results: A total of 231 (22.1%) participants from 11 allied health professions responded to the survey. Participants were mostly younger than 40 years (157/231, 67.9%), female (189/231, 81.8%), working in a clinical role (196/231, 84.8%), with a median of 10 years’ experience in their profession. Most participants had not used AI (185/231, 80.1%), had little to no knowledge about AI (201/231, 87%), and reported workforce knowledge and skill as the greatest challenge to incorporating AI in health care (178/231, 77.1%). Age (P=.01), profession (P=.009), and AI knowledge (P=.02) were strong predictors of the perceived professional impact of AI. AHPs generally felt unprepared for the implementation of AI in health care, with concerns about a lack of workforce knowledge on AI and losing valued tasks to AI. Prior use of AI (P=.02) and years of experience as a health care professional (P=.02) were significant predictors of perceived preparedness for AI. Most participants had not received education on AI (190/231, 82.3%), desired AI training (170/231, 73.6%), and believed AI would improve health care. Ideas and opportunities suggested for the use of AI within the allied health setting were predominantly nonclinical, administrative, and to support patient assessment tasks, with a view to improving efficiencies and increasing clinical time for direct patient care.
Conclusions: Education and experience with AI are needed in health care to support its implementation across allied health, the second largest workforce in health. Industry and academic partnerships with clinicians should not be limited to AHPs with high AI literacy as clinicians across all knowledge levels can identify many opportunities for AI in health care.
doi:10.2196/57204
Introduction
Artificial intelligence (AI) has been hailed as a solution to address the growing logistical and economic pressures on the health care system due to an aging population, rising chronic disease burden, and workforce shortages [
, ]. AI is a term used to describe a large and growing range of computer functions that can “learn” from data to make better decisions over time, such as machine learning, natural language processing, and computer vision [ ]. The potential of AI in health care lies in its ability to analyze unstructured data, detect abnormalities, provide correlations, and automate or assist with some human tasks [ ]. Although the implementation of digital health, including AI technologies, can initially be disruptive [ , ], it may improve work productivity and clinical workflow, reduce risk and error, and augment clinical decision-making and tasks such as document summarization, ultimately improving patient safety and outcomes [ - ].

Until recently, research about AI has been disproportionately focused on the merits of the technology, while investigation into workforce readiness and preparation for this new generation of technology is limited [
, ]. Lessons from previous industrial revolutions show that successful technology implementation is directly dependent on understanding and acknowledging the social dimensions of the human-technology relationship [ ]. The more complex the technology and the setting, such as AI in health care, the less likely it is to be successfully adopted by the people intended to use it [ , ]. It is anticipated that in the future, digital health will be practiced by the same health care professionals who currently deliver traditional care. It is not yet known what the full impact of digital technology will be on the health care industry [ ]. It is essential to understand and address the perceptions of health care professionals toward AI, as negative attitudes can lead to health technology abandonment, nonadoption, and misuse, which ultimately negatively impacts efforts to improve patient safety and quality of care [ , , ].

Advancing digital capability in allied health (AH) is one strategy used to address the increasing demands on a public health service in Queensland, Australia [
, ]. Technology implementation improves when health care professionals understand the purpose of the technology, how it is used, and how relevant it is to their role [ ]. In addition, the ease of implementation and ultimately the acceptance of technology rely on early and sustained user engagement [ , ]. Committed to maintaining and advancing digital capability [ ], our digital health service identified a need to understand allied health professionals’ (AHPs) perceptions of AI to help prepare staff for future AI implementations. Workforce perception is a powerful indicator of organizational readiness and is a predictor of technology use and acceptance [ , ]. Research on health care professionals’ perceptions of AI is emerging as organizations seek to understand workforce readiness [ , - ]. Currently, most of this research has explored the perceptions of medical professionals and nurses [ , , , ]. AH literature has focused on professions such as radiology [ , ], medical imaging [ , ], and pharmacy [ - ], with some research on physiotherapy [ , ] and audiology [ ], which may reflect more advanced stages of AI adoption in those professions. While these studies provide valuable insights into how these professions can be supported in using and implementing AI, few studies capture a variety of AHPs to obtain insights into their perceptions and readiness and how they compare with each other [ , ]. No internationally agreed definition of AH exists, yet the core functions and types of AH roles are similar between countries, although there may be some differences in education and the scope of practice [ ]. Australian AHPs are university-qualified practitioners with accredited, specialized expertise working within a set scope of practice to prevent, diagnose, and treat a range of conditions and illnesses [ - ]. AH is the second largest health workforce in Australia, consisting of at least 16 diverse professions, such as physiotherapy, pharmacy, occupational therapy, speech pathology, and dietetics, among others [ - ].

The professional skills, knowledge, work practices, and patient contact of AHPs are heterogeneous [
], which differ from the more homogeneous attributes described for nursing [ ] and medical professionals [ , ]. This highlights the need to better understand the perceptions of AHPs as a group while also exploring any differences between AH professions. There is great potential for AI technology in many AH professions, with emerging applications such as clinical decision support, wearable technologies, adverse drug reaction and drug interaction detection, and providing health information and advice [ , , ]. However, a human-centered understanding of the AHP workforce characteristics and perceptions is needed to ensure successful implementation and adoption of AI technology [ - ].

To investigate the perceptions of AHPs of AI, given its increasing role in the workplace, we used the Shinners Artificial Intelligence Perception (SHAIP) [
] tool. The SHAIP tool was developed in 2019 following an Australian e-Delphi study [ ], which gathered the opinions of an interdisciplinary panel of experts in health and technology. It is underpinned by sociotechnical systems theory, which acknowledges the complex relationships between the individual, the technology, and the workplace and supports the belief that organizations need a human-centered understanding of workforce characteristics and perceptions when implementing new technology [ ]. The tool was tested and validated in 2021 [ ]. The SHAIP tool is a 10-item, 2-factor tool that measures health care professionals’ perceptions of the professional impact of AI and preparedness for AI [ ]. Literature using the SHAIP tool [ ] is emerging worldwide; however, only a small number of AHPs have been represented in the results [ , ]. Knowledge of AHPs’ perceptions of AI and the opportunities they envision for their disciplines will be valuable to both organizations and industry, facilitating research targeted toward AI interventions that create clinical efficiencies in the AH setting.

To our knowledge, this project is the first of its kind to exclusively investigate AHP perceptions of AI and the opportunities and challenges for its use in health care delivery.
Methods
Study Design
A cross-sectional survey that included the SHAIP tool was conducted in mid-2023 on the Gold Coast, Queensland, Australia, with AHPs employed across one large tertiary hospital and health service. The STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines were followed to report our study’s findings [
]. The survey study was part of a larger project that also incorporated qualitative data collection through focus groups with AH clinicians and managers; a paper entitled “Overcoming barriers and enabling artificial intelligence adoption in allied health clinical practice: a qualitative study” is in preparation for publication elsewhere.

Ethical Considerations
Ethics approval was granted by the Gold Coast Health Human Research Ethics Committee (HREC/2023/QGC/96821). Participant consent was provided via acceptance of a consent statement on the opening page of the survey. The survey was anonymous, with an option for participants to provide contact details to participate in further parts of the project; these details were kept separate from the collected data. Any identifiable data were removed from the collected data and stored on a limited-access drive, separate from the study data, accessible only by research team members. No compensation or reimbursement was offered for participation in this study.
Study Setting and Participants
The Gold Coast Hospital and Health Service (GCHHS) employs approximately 10,000 staff to deliver public health care services to a general population of over 630,000 [
- ]. Approximately 1200 AHPs are employed at GCHHS across two tertiary hospitals, one day surgery hospital, health precincts, and community health services [ ]. Eleven AH departments support 16 AH professions [ ] and were listed in the data collection tool, with an “other” option available to AHPs not represented by these areas. All GCHHS AHPs were eligible to participate in this study [ ].

Study Recruitment
An online survey using Microsoft Forms that included the SHAIP tool was disseminated to AHPs at GCHHS via email. A link to the study was also advertised via online broadcasts, posters, and staff meetings. The survey was open for 7 weeks, from May 17 to July 6, 2023.
Study Measures
This study design was adapted from the study conducted by Shinners et al [
] with the addition of locally developed questions. The survey was piloted on 5 AHPs to test face and content validity, which resulted in minor changes to wording [ , ]. Participant demographics were collected, including age group, sex, facility (Gold Coast University Hospital, Robina Hospital, Varsity Lakes Day Hospital, health precinct, or community health), AH profession (audiology, clinical measurements, dietetics, medical imaging, occupational therapy, pharmacy, physiotherapy, podiatry, psychology, speech pathology, or social work), role (clinical informatics or technology, clinician, educator or clinical facilitator, governance, manager, researcher, or academic), years of experience in profession, years of experience using the “integrated electronic medical record,” AI knowledge (no knowledge, beginner understanding, intermediate understanding, or advanced understanding), and previous use of AI (yes, no, or unsure). Participants were asked to complete the 10-item, 2-factor SHAIP tool [ ] that measures health care professionals’ perceptions of factor one: professional impact of AI, and factor two: preparedness for AI. These questions sought participant agreement along a 5-point Likert scale (1=totally agree to 5=totally disagree), with a neutral midpoint of 3 (unsure).

Finally, participants indicated any prior AI education they had received (none, self-initiated online course, webinar, conference, workplace training, or formal university qualification), if they would like to receive AI education (yes, no, or unsure), and what type of AI education they would like to receive (general teaching about AI capabilities, the application of AI in health care, or the ethics of AI in health care) from a list adapted from Shinners et al [
]. Participants were also asked to identify challenges that exist for AI implementation (infrastructure, interoperability with current systems, cost to implement, workforce knowledge and skills, organizational support, interdisciplinary collaboration, clinical governance, research funding, change fatigue, workforce resistance, or “I don’t know”). Using open-ended questions, participants were asked to describe their understanding of AI and ideas or opportunities for AI that could be developed or implemented in their current practice.

Data Analysis
Completed responses were imported into Stata (version 17; StataCorp LLC). Demographic data, including age group, sex, and years of experience, were collected. Descriptively, the responses to each Likert scale question were presented as a median and IQR. Confirmatory factor analysis (CFA) was conducted to verify the 2-factor model reported by Shinners et al [
] that identified factor one (professional impact of AI) and factor two (preparedness for AI). A structural equation modeling approach was used to conduct the CFA, and standardized coefficients (loading factors) were produced. We initially used all 10 questions in the SHAIP tool, with questions 1 to 5 and 7 expected to load onto factor one and questions 6, 8, 9, and 10 expected to load onto factor two, as suggested by Shinners et al [ ]. Model goodness-of-fit was initially assessed by observation of individual question coefficients, their P values, and R2 values. Global model goodness-of-fit measures considered were the chi-square test (model vs saturated; though considered oversensitive in large sample sizes), the comparative fit index (CFI) and Tucker-Lewis index (TLI), the root-mean-square error of approximation (RMSEA), and the standardized root-mean-squared residual (SRMR). Values >0.90 for the CFI and TLI were considered indicators of a good fit. Values of 0.05 to 0.08 for the RMSEA and 0.05 to 0.10 for the SRMR were considered indications of acceptable fit. A test of acceptable fit based on the RMSEA, the PCLOSE test, was also performed. We also used modification indices (estat mindices command of Stata) to identify possible model respecifications to improve fit.

Factor scores for the professional impact of AI and preparedness for AI factors were calculated for each individual based on the final structural equation modeling model. These factor scores are similar to z scores, with 0 as the mean and values representing the number of SDs from the mean; more negative values indicate stronger agreement. Simple factor scores, equal to the mean Likert scale value of the questions contributing to the factor, were also calculated. Cronbach α was calculated to estimate the internal consistency of each factor given the loading questions and the structural equation modeling model.
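The analysis in this study was performed in Stata. Purely as an illustration, the minimal Python sketch below shows how the same steps could be approximated with the open-source semopy package, assuming a hypothetical data file with one row per respondent and columns q1-q10 holding the 1-5 Likert responses; it is not the code used in this study.

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical data: one row per respondent, columns q1-q10 with Likert responses
# (1=totally agree, 5=totally disagree).
df = pd.read_csv("shaip_responses.csv")
items = [f"q{i}" for i in range(1, 11)]

# Descriptive summary: median and quartiles for each Likert item.
print(df[items].quantile([0.25, 0.5, 0.75]))

# Two-factor CFA as described above: items 1-5 and 7 load on "impact",
# items 6 and 8-10 load on "prepared".
model_spec = """
impact   =~ q1 + q2 + q3 + q4 + q5 + q7
prepared =~ q6 + q8 + q9 + q10
"""
cfa = Model(model_spec)
cfa.fit(df[items])

print(cfa.inspect(std_est=True))   # loadings, P values, standardized estimates
print(calc_stats(cfa).T)           # chi-square, CFI, TLI, RMSEA, and related indices

# Simple factor scores: mean Likert value of the items contributing to each factor.
df["impact_simple"] = df[["q1", "q2", "q3", "q4", "q5", "q7"]].mean(axis=1)
df["prepared_simple"] = df[["q6", "q8", "q9", "q10"]].mean(axis=1)

def cronbach_alpha(item_block: pd.DataFrame) -> float:
    """Cronbach alpha for a block of items (columns) across respondents (rows)."""
    k = item_block.shape[1]
    item_variances = item_block.var(axis=0, ddof=1).sum()
    total_variance = item_block.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

print(cronbach_alpha(df[["q1", "q2", "q3", "q4", "q5", "q7"]]))
print(cronbach_alpha(df[["q6", "q8", "q9", "q10"]]))
```

The revised model reported in the Results (question 10 removed and a residual correlation between questions 4 and 7 added) can be expressed in the same syntax by dropping q10 from the second factor and adding a line such as q4 ~~ q7.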
Linear regression was used to identify predictors of each of the 2 factors (as factor scores) identified via CFA: professional impact of AI and preparedness for AI. Potential predictor variables considered were AH profession, age group, sex, AI knowledge, current use of AI, and years of experience in their profession. Initially, variables were considered in isolation and included in a multivariable model if there was evidence of an effect on professional impact of AI or preparedness for AI (P<.10). Variables were retained in the model if P<.05. Different models were constructed using factor scores or simple Likert scale values. The effects of predictor variables are presented as adjusted mean factor scores (AMFS) or approximate adjusted mean Likert scale (AAMLS) values. Differences between AH professions concerning professional impact of AI and preparedness for AI were further investigated by identifying professions that differed from the grand mean after adjustment for multiple comparisons (P<.05) by the Sidak method. Pairwise comparisons between each profession were also performed by 1-way ANOVA and the post hoc Fisher–Hayter pairwise comparisons procedure, which adjusts for multiple comparisons by calculating the critical value of the studentized range (CVSR) above which differences have a P<.05.
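Similarly, a brief sketch of the univariable screening and multivariable modeling described above, written in Python with statsmodels and hypothetical column names (impact_score, profession, age_group, and so on), is shown below as an illustration only; the adjusted mean factor scores and the Sidak and Fisher-Hayter comparisons reported in the Results were obtained with Stata postestimation commands and are not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per respondent with the CFA factor score
# ("impact_score") and the candidate predictor variables.
df = pd.read_csv("factor_scores_with_predictors.csv")

categorical = ["profession", "age_group", "sex", "ai_knowledge", "uses_ai"]
continuous = ["years_experience"]

# Step 1: univariable screening; keep predictors with P < .10.
retained = []
for var in categorical + continuous:
    term = f"C({var})" if var in categorical else var
    fit = smf.ols(f"impact_score ~ {term}", data=df).fit()
    if fit.f_pvalue < 0.10:          # overall F test for the single predictor
        retained.append(term)

# Step 2: multivariable model; predictors would then be dropped unless P < .05.
formula = "impact_score ~ " + (" + ".join(retained) if retained else "1")
final_model = smf.ols(formula, data=df).fit()
print(final_model.summary())
```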
Responses to the open-ended question asking participants to define AI were deductively grouped into four categories using NVivo (QSR International) according to the framework described by Shinners et al [
]. Categories were then iteratively revised by the research team and category definitions refined.

Inductive content analysis was used to group responses to open-ended questions on ideas or opportunities for AI that could be developed or implemented in current practice. The major categories and subcategories identified were refined by the research team to reach a consensus.
Results
Participant Demographics
GCHHS human resources management confirmed that 1045 AHPs were employed at GCHHS at the time of this study. A total of 245 participants completed the survey. Fourteen responses were removed: 8 due to incomplete data (participants had not answered any questions) and 6 as participants were not AHPs. A final 231 responses remained, representing 22.1% (231/1045) of the total population. Respondents were predominantly younger than 40 years (157/231, 67.9%), female (189/231, 81.8%), primarily working in a clinical role (196/231, 84.8%), with a median of 10 years (IQR 6-17) of experience in their discipline, and working mostly at one hospital (186/231, 80.5%;
). Eleven AH professions were represented in the data, with the majority being pharmacists (46/231, 19.9%), physiotherapists (39/231, 16.9%), and occupational therapists (38/231, 16.5%). The AH departments with the highest response rates were audiology (5/8, 62.5% of all audiologists) and dietetics (29/64, 45.3% of all dietitians). Conversely, only 8.5% (11/128) of medical imaging staff responded, and no clinical measurement staff completed the survey. Most respondents reported they were not using AI in their current role (185/231, 80.1%), and most (201/231, 87.0%) rated their knowledge of AI as either beginner or having no knowledge. More than three-quarters of respondents believed workforce knowledge and skill were the greatest challenge to incorporating AI in health care (178/231, 77.1%), followed by infrastructure (141/231, 61.0%) and workforce resistance (119/231, 51.5%).

Question and category | Participants, n (%)
Age (years)
18-30 | 58 (25.1)
31-40 | 99 (42.9)
41-50 | 43 (18.6)
51-60 | 27 (11.7)
61-70 | 4 (1.7)
Sex
Man | 40 (17.3)
Woman | 189 (81.8)
Prefer not to say | 2 (0.9)
Another term | 0 (0)
Profession
Audiology | 5 (2.2)
Clinical measurements | 0 (0)
Dietetics | 29 (12.6)
Medical imaging | 11 (4.8)
Occupational therapy | 38 (16.5)
Other or unknown | 5 (2.1)
Pharmacy | 46 (19.9)
Physiotherapy | 39 (16.9)
Podiatry | 3 (1.3)
Psychology | 11 (4.8)
Social work | 27 (11.7)
Speech therapy | 17 (7.4)
Role
Clinical informatics or technology | 5 (2.2)
Clinician | 196 (84.8)
Educator or clinical facilitator | 5 (2.2)
Governance | 1 (0.4)
Manager | 18 (7.8)
Researcher or academic | 4 (1.7)
Unknown | 2 (0.9)
Site (multiselect option)
Gold Coast University Hospital | 186 (80.5)
Robina | 93 (40.3)
Varsity Lakes | 5 (2.2)
Health precinct or community health | 37 (16)
Other or unknown | 3 (1.3)
Confirmatory Factor Analysis
Initial CFA using the original SHAIP tool showed that question 10, “I believe that should AI technology make an error, full responsibility lies with the healthcare professional,” had a low correlation (0.097, P=.26) with the preparedness for AI factor. The CFI was marginally above the acceptable criterion (0.905>0.90), the TLI was below the acceptable cutoff (0.87<0.95), and the RMSEA was 0.08, at the margin of acceptability (PCLOSE test of RMSEA <0.05, P=.01), with chi-square P=3.5×10⁻⁶. Taken together, this suggested the model was not a good fit for the data sample.
A revised model was created in which question 10 was removed and a correlation between question 4 and question 7 was included in the model following suggestions from assessment of modification indices.
The revised, 9-item model was reanalyzed using CFA. In this model, CFI=0.966, TLI=0.955, RMSEA=0.048 (PCLOSE P=.52), SRMR=0.067, and chi-square P=.02. Although the chi-square remained significant, all other measures indicated a good fit. Cronbach α was used to determine the reliability of the two factors. A Cronbach α score >0.7 indicates good internal consistency; however, a Cronbach α of 0.5 or 0.6 can be acceptable in some cases [
- ]. The professional impact of AI factor had a Cronbach α of 0.82, while the preparedness for AI factor had a lower reliability, with a Cronbach α of 0.54, which is likely due to the small number of items contributing to the factor [ ].

Factors Influencing Perceptions of AI Using the SHAIP Tool
Overview
Likert scale responses were analyzed for overall perceptions of AI for each question (
). This analysis showed that AHPs generally agreed with the statements relating to the use of AI in their specialty, namely that it “could improve the delivery of patient care,” “improve clinical decision making,” and “improve population health outcomes,” and that AI will “change my role as a healthcare professional in the future.” Participants were less confident about the statements relating to AI reducing “financial costs” and AI taking over “part of my role as a healthcare professional.” In contrast, participants disagreed with the statements that “healthcare professionals are prepared for the introduction of AI technology,” that there is an “ethical framework in place” for AI, and that “should AI technology make an error; full responsibility lies with the healthcare professional.” AHPs totally disagreed with the statement that they are “adequately trained to use AI.”

Participant responses were stronger when statements related to the individual and less certain when statements related to the profession overall. For example, 84.4% (195/231) of participants disagreed with the statement “I believe that I have been adequately trained to use AI that is specific to my role” (strongly disagree 132/231 or disagree 63/231) compared to 45.5% (105/231) being unsure about the statement “I believe overall healthcare professionals are prepared for the introduction of AI technology.”
Factor One: Professional Impact of AI
Multivariable linear regression identified that age group, profession, and AI knowledge were independent predictive factors for AHPs’ perception of the professional impact of AI. AHPs in the 51-60 years age group were more likely to disagree that AI would affect their professional role (AMFS=0.24; AAMLS=2.92) than other age groups (AMFS=–0.032; AAMLS=2.56, P=.01). For profession, pharmacists were most likely to think AI would affect their professional role compared to other AH disciplines (AMFS=–0.314; AAMLS=2.22, P=.01), with physiotherapists (AMFS=0.215; AAMLS=2.90, P=.12) and social workers (AMFS=0.189; AAMLS=2.91, P=.42) being the least likely. ANOVA (unadjusted) using the Fisher-Hayter pairwise comparison post hoc adjustment for multiple comparisons at the P<.05 significance level calculated the CVSR to be 4.60. Thus, occupational therapists (CVSR=5.56), physiotherapists (CVSR=7.97), and social workers (CVSR=7.40) were less likely than pharmacists (P<.05) to think AI would affect their professional role. No other differences with P<.05 were identified between professions.
Perceived AI knowledge was shown to influence a person’s agreement on the professional impact of AI (P=.02). People with advanced knowledge (AMFS=–0.488; AAMLS=1.94) were more likely to agree AI would have an impact compared to those with an intermediate understanding (AMFS=–0.086; AAMLS=2.58, P=.10), a beginner understanding (AMFS=–0.014; AAMLS=2.59, P=.04), or no knowledge (AMFS=0.166; AAMLS=2.76, P=.007).
Factor Two: Preparedness for AI
Overall, it was apparent from the mean Likert scale score (4.00) that very few AHPs felt prepared for AI (only 5 of 231 had AAMLS≤3.00). Multivariable linear regression identified that current use of AI was an important independent predictive factor for preparedness for AI (P=.04). People who did not currently use AI were more likely to disagree that they were prepared for AI (AMFS=0.55; AAMLS=4.02) than people who were unsure whether they were using AI (AMFS=–0.258; AAMLS=3.68, P=.02) or who confirmed they were currently using AI (AMFS=–0.140; AAMLS=3.68, P=.29). Years of experience in a profession was also a predictor for preparedness for AI (P=.02), such that the factor score increased by 0.012 (or approximately 0.009 of a Likert scale point) per year of experience. That is, more experienced AHPs felt less prepared. No other predictive factors were identified for preparedness for AI, as shown in Table S1 in
.

Perceptions of Education
Participants indicated what type of AI education they had previously received and what type of education they required. Most participants indicated that they had not received any education on AI (190/231, 82.3%;
). Most participants (194/231, 84%) perceived “application of AI” as the area of education most required. “Ethics of AI” (158/231, 68.4%) and “general teaching about AI” (156/231, 67.5%) were rated similarly, while “AI techniques” (131/231, 56.7%) was selected by more than half of the participants.

Question and category | Participants, n (%)
How do you rate your understanding of AI?
Advanced knowledge | 5 (2.2)
Intermediate understanding | 25 (10.8)
Beginner understanding | 160 (69.3)
No knowledge | 41 (17.7)
What education or training have you had about AI?
None | 190 (82.3)
Self-initiated online course, webinar, conference | 32 (13.9)
Other | 11 (4.8)
Workplace training | 5 (2.2)
Formal university qualification | 2 (0.9)
Would you like to receive education about AI?
Yes | 170 (73.6)
Unsure | 41 (17.7)
No | 19 (8.2)
Unknown | 1 (0.4)
Which AI topics would you like to know more about?
The application of artificial intelligence in healthcare | 194 (84)
The ethics of artificial intelligence in healthcare | 158 (68.4)
General teaching about artificial intelligence capabilities | 156 (67.5)
Training on artificial intelligence techniques | 131 (56.7)
Unknown | 20 (8.7)
Other | 5 (2.2)
How many years’ experience with ieMR?
0 | 3 (1.3)
0-1 | 11 (4.8)
1-2 | 26 (11.3)
2-3 | 21 (9.1)
3-4 | 42 (18.2)
4+ | 126 (54.5)
Unknown | 2 (0.9)
In your current role are you using AI?
Yes | 14 (6.1)
Unsure | 32 (13.9)
No | 185 (80.1)
Unknown | 0 (0)
Challenges to incorporating AI in your workplace?
Infrastructure | 141 (61)
Interoperability with current systems | 117 (50.6)
Cost to implement | 118 (51.1)
Workforce knowledge and skills | 178 (77.1)
Organizational support | 89 (38.5)
Interdisciplinary collaboration | 62 (26.8)
Clinical governance | 112 (48.5)
Research funding | 61 (26.4)
Change fatigue | 83 (35.9)
Workforce resistance | 119 (51.5)
I don’t know | 19 (8.2)
Other | 38 (16.5)
AI: artificial intelligence.
ieMR: integrated electronic medical record.
Understanding of AI
For the question “in your own words, what do you understand artificial intelligence to mean,” 21 participants either did not respond or indicated they did not know. The remaining 210 participants provided a total of 220 responses, which were allocated into one of four categories.
Almost half of the responses (96/220, 44%) defined AI as computers with intelligence. For example: “computer-generated intelligence,” “intelligence created by a machine as opposed to a human,” and “use of intelligent/intuitive technology.”
Over a third of the responses (79/220, 36%) defined AI as equipment with nonhuman qualities. Statements included: “computer programs,” “using technology,” and “computer generated outcomes based on algorithms.”
Eleven percent (25/220) of responses defined AI as having human-like qualities with statements such as “computers being able to think for themselves,” “computers and software with the ability to think and learn independently,” and “an online personality.”
Four percent (9/220) defined AI as a robot with statements such as “robots and other assistive tech devices,” “robotics,” and “robots.”
Eleven responses were unable to be categorized into the framework. Such statements included “ability to imitate” and “AI may enhance our work in the future, however, without regulation the risks are high for incorrect information.”
Further analysis revealed a theme expressing concern about the loss of tasks or occupations due to the introduction of AI. Fourteen percent (30/220) described it as “basically a way to take away jobs from humans,” “taking human interaction or judgement out of the equation,” and “preprogrammed intelligence that has capacity to attempt to interact as a human replacement.”
Opportunities for AI in AH
Seventy participants responded to the open-ended question to describe ideas for AI opportunities, providing a total of 90 ideas. Responses were grouped into 6 categories. “Administrative” included opportunities for AI in staff rostering, referral management, correspondence, and documentation. “Clinical decision support” listed ideas in patient monitoring, screening, and assessment along with risk stratification and prioritization. “Medication management” identified medication reconciliation, adherence, and optimization tasks plus medication information solutions for patients. “Educational” opportunities included clinician simulation and training and novel patient education resources. “Treatment support” identified ideas for AI such as obtaining and synthesizing patient history collateral and wearable technologies improving access to health care. “Auditing and analytics” revealed opportunities in workplace auditing, improving workflow efficiencies, and analyzing population trends to predict demand.
Discussion
Principal Findings
This study explored AHP perceptions of AI as well as the opportunities and challenges for its use in health care delivery in a large tertiary health service. The findings reveal that although a lack of AI knowledge and skills is perceived as the greatest barrier to AI implementation in health care, AHPs remain optimistic about the potential benefits of AI for health care and desire AI training and education. Key factors that influence AHP perceptions of AI were identified, such as the AH profession and the use of AI. Leveraging these factors could help inform future implementation strategies. Although most participants had limited AI knowledge, they were able to identify opportunities for AI in the delivery of health care. This study is the first to our knowledge that has exclusively explored AHP perceptions, providing unique insights that may inform future workforce readiness and education initiatives as well as guide further research and implementation strategies.
Factors Influencing Perceptions of AI
This study showed that age, profession, and AI knowledge are key predictors of perceptions about the professional impact of AI, while the use of AI and years of experience in the profession were predictors of perceptions about preparedness for AI.
A prior study identified age as an important variable influencing digital transformation in health care, reporting that younger health professionals thought it was too slow when compared to older participants [
]. Interestingly, this study found that AHPs aged 51 to 60 years were less likely to perceive that AI would affect their professional role compared to younger age groups. Similarly, one Australian study [ ] found that more experienced medical officers (>30 years of experience) were less likely to expect AI would impact their role in the coming decade [ ]. However, evidence regarding the influence of age on perceptions of AI in health care has been varied and inconclusive [ , , ]. Further, this study also found that more experienced AHPs perceived they were less prepared for AI, which has not been identified in prior studies [ , ]. The influence of age and experience on AHP perceptions may be explained by how imminent clinicians perceive AI in health care to be. The average age at retirement in Australia is 56.3 years, and people who are currently working intend to retire at 65.5 years of age [ ]. If AHPs aged between 51 and 60 years do not believe AI will be introduced in the next 10 years or more, they may intend to retire before they expect the digital revolution will occur; therefore, they will not be affected by AI in health care as AHPs.

This is inconsistent with the global and domestic strategic planning to support the surge of AI and digital health technologies in health care currently, rather than in 10 years [
, , , ]. Communicating expected timeframes of AI in health care may help AHPs of all ages and experience levels to be more aware of the likely immediacy of the change.

AHP perceptions of the professional impact of AI varied based on individual AH professions. This may be due to the extent to which each AH profession uses technology to deliver health care. Radiology [
, ] and medical imaging [ , ] are data-rich professions, leading AI adoption in AH care and more likely to use data-driven innovation than other AH professions such as social work. Pharmacists in this study were more likely to perceive an impact on their professional role compared to occupational therapists, physiotherapists, and social workers. The emerging evidence relating to the application of AI in pharmacy [ , - ] could reflect the increasing awareness of the impact of AI on this profession, as captured in this study. Chalasani et al [ ] recently identified numerous applications for AI in pharmacy, including adverse drug reaction detection, drug interaction identification, and dose recommendations. Varied perceptions may also be explained by the nature of the work conducted by each profession and the degree of direct and indirect patient care each profession provides. AHP skills are diverse, and the required knowledge, scope of practice, and competency standards are unique to each profession [ ]. As a result, AI implementation strategies should be informed by profession-specific research to develop tailored approaches for each AH profession rather than a one-size-fits-all approach.

Most of the participants in this study were not using AI in their current roles. The finding that AHPs who had used AI were more likely to feel prepared for AI when compared with those who had not used AI is consistent with other studies [
, ]. Chen et al [ ] found that those who had used AI in the clinical setting had a better understanding of AI and were more positive about its potential application in health care. Even so, only 10% to 30% of health professionals worldwide have used AI in clinical practice [ ]. Digital competence has been closely linked with professional confidence in previous studies exploring the introduction of new technology [ ]. Professionals and managers recognize that technology such as electronic health records or telemedicine demands increased learning and new skills, which require time and exposure [ ]. A lack of first-hand experience with AI can prevent AHPs from adapting to and embracing AI in health provision. As the second largest workforce in health, AHPs have the potential to pioneer organizational change with AI implementation. Organizational efforts in digital transformation may be negatively affected if AHPs are not prepared adequately for AI adoption [ , , ].

A lack of knowledge and first-hand experience with AI is emerging as a challenge for health professionals worldwide [
- , ]. This study found that AHPs with lower AI knowledge (no knowledge to intermediate-level knowledge) were less likely to think there will be a professional impact of AI when compared to AHPs with advanced-level knowledge. The literature shows that most health professionals lack basic AI knowledge [ , ], and most health professionals reported a lack of direct, hands-on experience with AI [ , , ]. Education of AHPs about AI is urgently needed as a core implementation strategy for organizational adoption and preparedness [ ].

Perceptions on AI’s Purpose, Education, and Understanding of AI
In this study, AHPs expressed some skepticism about the purpose of introducing AI in health care and concern that it will reduce employment or remove valued clinical tasks. This is consistent with the reported concerns of medical officers in prior studies: AI replacing clinicians, taking over clinical tasks, or reducing the reliance on medical specialist experience [
, , , ]. Studies show that health care professionals are generally aware AI will have organizational and professional impacts that they are not yet prepared for, which may threaten to undermine the benefits of AI before its implementation [ , , - ]. Despite this, AHPs in this study remain optimistic about the potential benefits of AI, such as improving health care, clinical decision-making, and delivery of patient care, consistent with other studies [ , , ]. These views are not dissimilar to those held during previous digital health revolutions, in which the rapid increase in the use of the internet and computers in health care delivery prompted an examination of the expectations, skills, and resources of users [ ]. As a health service that has undergone considerable digital change in recent years, it is unsurprising that staff express trepidation based on their learned experience of digital health and apply this to AI. Consideration should be given to openly acknowledging, addressing, and monitoring the impact of the unintended consequences of future AI implementation to help counter AHPs’ pessimism and inspire interest in this innovation [ ].

When asked to describe their understanding of AI, AHPs described AI as computers with intelligence, equipment with nonhuman qualities, human-like qualities, or as a robot, as found in prior literature [
]. In the absence of an agreed-upon definition of AI, it is not possible to assess how correct these descriptions are; however, they do demonstrate that AHPs are trying to understand AI. It is clear that the primary challenge facing AHPs in implementing AI is a lack of knowledge and skills in AI, consistent with other findings [ , ]. This, combined with AHPs’ desire to learn, suggests this is an opportune time to address knowledge deficits. Targeted AI education and training would be highly beneficial considering most AHPs in this study had little to no knowledge about AI and reported little education, training, or experience with AI. Most AHPs desired AI training and indicated a preference for education about the application of AI in the health care setting, followed by ethical considerations of its use and general AI knowledge, consistent with prior findings [ , ]. Unsurprisingly, AHPs feel inadequately prepared or trained for AI given the current lack of education or any competency framework available for health professionals to develop skills in digital health [ , ]. Incorporation of AI education into professional, tertiary, and workplace training is crucial to overcome the barriers that may otherwise limit the adoption of AI in health care [ , ]. Strategies identified to help address critical knowledge and experience deficits of clinicians include professional organization position statements, professional accreditation, digital literacy embedded in both undergraduate and postgraduate tertiary education, multidisciplinary team learning, and specialist digital health career pathways [ , , ].

Opportunities for AI in AH
Our findings suggest that AHPs are well placed to contribute to the co-design of AI applications in clinical settings to increase the use of AI tools and improve patient and system outcomes. Indeed, approximately a third of respondents identified ideas for the application of AI in AH. Ideas were predominantly related to nonclinical, administrative tasks and to supporting patient assessment to improve efficiencies and increase clinical time for direct patient care. It is worth noting that ideas were generated by some clinicians who lacked knowledge of and experience with AI but were generally optimistic about the future impact of AI. This shows that industry and academic partnerships with clinicians should not be limited to engagement with those with high AI literacy, as clinicians across all knowledge levels may still be able to identify relevant AI opportunities.
Limitations and Future Directions
A key strength of this study was the relatively high response rate (22.1%) of the survey, leading to an estimated margin of error of 0.045, and the broad representation across varied AH professions. However, we still cannot be certain that the sample is representative of the whole AHP population.
Key limitations of this study include, first, that responders, compared to nonresponders, may have had a specific interest in AI, either negative or positive; therefore, AHPs who are not interested in or aware of AI could be underrepresented. Second, the survey collection tool named 11 GCHHS AH departments, which may have led to the lack of participation from unnamed professions, for example, music therapists, and limits the applicability of the findings to the broader range of AH professions. Third, clinical measurements and medical imaging [
, , ] clinicians were not well represented in the data despite efforts to engage and recruit them; therefore, targeting these populations may be a consideration in future studies. Fourth, the conduct of the survey within a single hospital and health service setting may limit the generalizability of the findings to AHPs in other settings. Fifth, the survey did not meet the ideal minimum Cronbach α for internal consistency for factor two (preparedness for AI), likely due to the small number of items contributing to the factor. Further research should explore the key barriers to and enablers of implementing AI in health care from the AHP perspective, to inform AI implementation strategies and facilitate the adoption of AI.

Clinical Implications
This study highlights that AHPs perceive they are unprepared for AI implementation within their health care setting. As the second largest workforce in health [
], the preparation of AHPs should be a priority given the rate at which AI is developing in the health care sector. This should include targeted education and training, along with first-hand experience with AI, to maximize readiness for the coming widespread adoption and implementation of AI across health care.

With a lack of external training providers and limited clinician time available outside official duty hours, health care organizations should consider how to mobilize the workforce to learn and use new AI technologies tailored to the different needs of the professional groups [
]. Organizations should consider collaborating with AHPs and digital industry experts to identify and explore opportunities for AI in health care, regardless of digital knowledge and readiness.

Conclusion
AH, the second largest workforce in health, has untapped potential to help pioneer AI implementation in health care. A lack of workforce AI knowledge and skills was identified as a potential key barrier to implementation. Targeted education, training, and hands-on experience with AI should be prioritized for AHPs to support the implementation of the rapidly emerging digital revolution. Further research is required to more deeply understand the barriers to and enablers of AI implementation from the perspective of AHPs to tailor education and inform workforce readiness strategies that drive change and lead innovation. Industry and academic partnerships with clinicians should not be limited to AHPs with high AI literacy, as clinicians across all knowledge levels can identify many opportunities for AI in health care.
Acknowledgments
The authors would like to thank AH staff at Queensland Health GCHHS for their participation in this study and GCHHS Allied Health Research for providing funding to complete this research.
Data Availability
The datasets generated or analyzed during this study are available from the corresponding author upon reasonable request.
Authors' Contributions
JH, LH, and RW conceptualized this study and developed the methodology. LH, RW, LS, RLA, and BR researched literature, developed the protocol, and gained ethical approval. LH, RW, RLA, and JH were involved in participant recruitment and data collection. JH, LS, and IH conducted data analysis. JH drafted this paper. All authors reviewed, edited, and approved the final version of this paper.
Conflicts of Interest
None declared.
Univariable linear regressions: allied health professional perceptions of professional impact of AI and preparedness for AI (N=231) table. AI: artificial intelligence.
DOCX File, 23 KB

References
- Allied health workforce data gap analysis, issues paper. Australian Government Department of Health and Aged Care. 2022. URL: https://www.health.gov.au/resources/publications/allied-health-workforce-data-gap-analysis-issues-paper?language=en [accessed 2023-06-10]
- Topol EJ. The Topol Review: Preparing the Healthcare Workforce to Deliver the Digital Future. England. Health Education; 2019.
- Russell S. Artificial Intelligence: A Modern Approach. 3rd ed. New Jersey. Pearson; 2016.
- Powles J, Hodson H. Google DeepMind and healthcare in an age of algorithms. Health Technol (Berl). 2017;7(4):351-367. [FREE Full text] [CrossRef] [Medline]
- Chomutare T, Tejedor M, Svenning TO, Marco-Ruiz L, Tayefi M, Lind K, et al. Artificial intelligence implementation in healthcare: a theory-based scoping review of barriers and facilitators. Int J Environ Res Public Health. 2022;19(23):16359. [FREE Full text] [CrossRef] [Medline]
- Balch JA, Loftus TJ. Actionable artificial intelligence: overcoming barriers to adoption of prediction tools. Surgery. 2023;174(3):730-732. [CrossRef] [Medline]
- Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94-98. [FREE Full text] [CrossRef] [Medline]
- World Health Organisation: Global strategy on digital health 2020 - 2025. Geneva. World Health Organisation; 2021. URL: https://www.who.int/docs/default-source/documents/gs4dhdaa2a9f352b0445bafbc79ca799dce4d.pdf [accessed 2024-11-29]
- Klarenbeek SE, Schuurbiers-Siebers OCJ, van den Heuvel MM, Prokop M, Tummers M. Barriers and facilitators for implementation of a computerized clinical decision support system in lung cancer multidisciplinary team meetings-a qualitative assessment. Biology (Basel). 2020;10(1):9. [FREE Full text] [CrossRef] [Medline]
- Alami H, Lehoux P, Auclair Y, de Guise M, Gagnon M, Shaw J, et al. Artificial intelligence and health technology assessment: anticipating a new level of complexity. J Med Internet Res. 2020;22(7):e17707. [FREE Full text] [CrossRef] [Medline]
- Goldsack J, Zanetti C. Defining and developing the workforce needed for success in the digital era of medicine. Digit Biomark. 2020;4(Suppl 1):136-142. [FREE Full text] [CrossRef] [Medline]
- Hanseth O, Monteiro E. Changing irreversible networks. Institutionalisation and infrastructure. 1997:1-15.
- Cresswell K, Sheikh A. Organizational issues in the implementation and adoption of health information technology innovations: an interpretative review. Int J Med Inform. 2013;82(5):e73-e86. [CrossRef] [Medline]
- Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A'Court C, et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res. 2017;19(11):e367. [FREE Full text] [CrossRef] [Medline]
- Ćwiklicki M, Klich J, Chen J. The adaptiveness of the healthcare system to the fourth industrial revolution: a preliminary analysis. Futures. 2020;122:102602. [CrossRef]
- Lambert SI, Madi M, Sopka S, Lenes A, Stange H, Buszello C, et al. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. npj Digit Med. 2023;6(1):111. [CrossRef] [Medline]
- Winter PD, Chico TJA. Using the non-adoption, abandonment, scale-up, spread, and sustainability (NASSS) framework to identify barriers and facilitators for the implementation of digital twins in cardiovascular medicine. Sensors (Basel). 2023;23(14):6333. [FREE Full text] [CrossRef] [Medline]
- Gold Coast Health Digital Strategic Plan and Roadmap 2021-2024. Southport. Queensland Health; 2022.
- Allied Health Digital Transformation Roadmap 2023-2033. In: Office of the Chief Allied Health Officer. Australia. Queensland Health; 2023.
- Cresswell KM, Lee L, Mozaffar H, Williams R, Sheikh A, NIHR ePrescribing Programme Team. Sustained user engagement in health information technology: the long road from implementation to system optimization of computerized physician order entry and clinical decision support systems for prescribing in hospitals in England. Health Serv Res. 2017;52(5):1928-1957. [FREE Full text] [CrossRef] [Medline]
- Hogan J, Grant G, Kelly F, O'Hare J. Factors influencing acceptance of robotics in hospital pharmacy: a longitudinal study using the extended technology acceptance model. Int J Pharm Pract. 2020;28(5):483-490. [CrossRef] [Medline]
- Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA. 2016;316(22):2353-2354. [CrossRef] [Medline]
- Catalina QM, Fuster-Casanovas A, Vidal-Alaball J, Escalé-Besa A, Marin-Gomez FX, Femenia J, et al. Knowledge and perception of primary care healthcare professionals on the use of artificial intelligence as a healthcare tool. Digit Health. 2023;9:20552076231180511. [FREE Full text] [CrossRef] [Medline]
- Shinners L, Aggar C, Stephens A, Grace S. Healthcare professionals' experiences and perceptions of artificial intelligence in regional and rural health districts in Australia. Aust J Rural Health. 2023;31(6):1203-1213. [CrossRef] [Medline]
- Chen M, Zhang B, Cai Z, Seery S, Gonzalez MJ, Ali NM, et al. Acceptance of clinical artificial intelligence among physicians and medical students: a systematic review with cross-sectional survey. Front Med (Lausanne). 2022;9:990604. [FREE Full text] [CrossRef] [Medline]
- Laï MC, Brian M, Mamzer MF. Perceptions of artificial intelligence in healthcare: findings from a qualitative survey study among actors in France. J Transl Med. 2020;18(1):14. [FREE Full text] [CrossRef] [Medline]
- Alelyani M, Alamri S, Alqahtani MS, Musa A, Almater H, Alqahtani N, et al. Radiology community attitude in Saudi Arabia about the applications of artificial intelligence in radiology. Healthcare (Basel). 2021;9(7):834. [FREE Full text] [CrossRef] [Medline]
- Ng CT, Roslan SNA, Chng YH, Choong DAW, Chong AJL, Tay YX, et al. Singapore radiographers' perceptions and expectations of artificial intelligence - a qualitative study. J Med Imaging Radiat Sci. 2022;53(4):554-563. [CrossRef] [Medline]
- Abuzaid M, Tekin H, Reza M, Elhag I, Elshami W. Assessment of MRI technologists in acceptance and willingness to integrate artificial intelligence into practice. Radiography (Lond). 2021;27 Suppl 1:S83-S87. [CrossRef] [Medline]
- Botwe B, Akudjedu T, Antwi W, Rockson P, Mkoloma S, Balogun E, et al. The integration of artificial intelligence in medical imaging practice: perspectives of African radiographers. Radiography (Lond). 2021;27(3):861-866. [CrossRef] [Medline]
- Aldughayfiq B, Sampalli S. Patients', pharmacists', and prescribers' attitude toward using blockchain and machine learning in a proposed ePrescription system: online survey. JAMIA Open. 2022;5(1):ooab115. [FREE Full text] [CrossRef] [Medline]
- Balestra M, Chen J, Iturrate E, Aphinyanaphongs Y, Nov O. Predicting inpatient pharmacy order interventions using provider action data. JAMIA Open. 2021;4(3):ooab083. [FREE Full text] [CrossRef] [Medline]
- Hogue S, Chen F, Brassard G, Lebel D, Bussières JF, Durand A, et al. Pharmacists' perceptions of a machine learning model for the identification of atypical medication orders. J Am Med Inform Assoc. 2021;28(8):1712-1718. [FREE Full text] [CrossRef] [Medline]
- Chalasani SH, Syed J, Ramesh M, Patil V, Pramod Kumar T. Artificial intelligence in the field of pharmacy practice: a literature review. Explor Res Clin Soc Pharm. 2023;12:100346. [FREE Full text] [CrossRef] [Medline]
- Alsobhi M, Sachdev HS, Chevidikunnan MF, Basuodan R, Khan F. Facilitators and barriers of artificial intelligence applications in rehabilitation: a mixed-method approach. Int J Environ Res Public Health. 2022;19(23):15919. [FREE Full text] [CrossRef] [Medline]
- Alsobhi M, Khan F, Chevidikunnan MF, Basuodan R, Shawli L, Neamatallah Z. Physical therapists' knowledge and attitudes regarding artificial intelligence applications in health care and rehabilitation: cross-sectional study. J Med Internet Res. 2022;24(10):e39565. [FREE Full text] [CrossRef] [Medline]
- Barbour DL, Howard RT, Song XD, Metzger N, Sukesan KA, DiLorenzo JC, et al. Online machine learning audiometry. Ear Hear. 2019;40(4):918-926. [FREE Full text] [CrossRef] [Medline]
- Turnbull C, Grimmer-Somers K, Kumar S, May E, Law D, Ashworth E. Allied, scientific and complementary health professionals: a new model for Australian allied health. Aust Health Rev. 2009;33(1):27-37. [CrossRef] [Medline]
- Australian Government, Department of Health and Aged Care. About allied health care. 2024. URL: https://www.health.gov.au/topics/allied-health/about [accessed 2024-05-17]
- Australian Health Practitioner Regulation Agency. Professions and divisions. 2024. URL: https://www.ahpra.gov.au/Registration/Registers-of-Practitioners/Professions-and-Divisions.aspx [accessed 2024-05-17]
- Angus RL, Hattingh HL, Weir KA. The health service perspective on determinants of success in allied health student research project collaborations: a qualitative study guided by the consolidated framework for implementation research. BMC Health Serv Res. 2024;24(1):143. [FREE Full text] [CrossRef] [Medline]
- Australian Government, Australian Institute of Health and Welfare, Health workforce. 2022. URL: https://www.aihw.gov.au/reports/workforce/health-workforce [accessed 2023-11-29]
- Queensland Government. Queensland Health careers: allied health requirements to practice. URL: https://www.careers.health.qld.gov.au/allied-health-careers/registration-requirements [accessed 2024-05-22]
- VanGeest J, Beebe TJ, Johnson TP. Surveys of physicians. In: Handbook of health survey methods. Hoboken, New Jersey. John Wiley & Sons, Inc; 2015:515-543.
- Kidd JC, Colley S, Dennis S. Surveying allied health professionals within a public health service: what works best, paper or online? Eval Health Prof. 2021;44(3):226-234. [CrossRef] [Medline]
- VanGeest J, Johnson TP. Surveying nurses: identifying strategies to improve participation. Eval Health Prof. 2011;34(4):487-511. [CrossRef] [Medline]
- Ravali RS, Vijayakumar TM, Santhana Lakshmi K, Mavaluru D, Reddy LV, Retnadhas M, et al. A systematic review of artificial intelligence for pediatric physiotherapy practice: past, present, and future. Neuroscience Informatics. 2022;2(4):100045. [CrossRef]
- Morrow E, Zidaru T, Ross F, Mason C, Patel KD, Ream M, et al. Artificial intelligence technologies and compassion in healthcare: a systematic scoping review. Front Psychol. 2023;13:971044. [FREE Full text] [CrossRef] [Medline]
- Adaba GB, Kebebew Y. Improving a health information system for real-time data entries: an action research project using socio-technical systems theory. Inform Health Soc Care. 2018;43(2):159-171. [CrossRef] [Medline]
- Appelbaum SH. Socio‐technical systems theory: an intervention strategy for organizational development. Manag Decis. 1997;35(6):452-463. [CrossRef]
- Clark D, Dean G, Bolton S, Beeson B. Bench to bedside: the technology adoption pathway in healthcare. Health Technol. 2020;10(2):537-545. [CrossRef]
- Shinners L, Grace S, Smith S, Stephens A, Aggar C. Exploring healthcare professionals' perceptions of artificial intelligence: piloting the Shinners Artificial Intelligence Perception tool. Digit Health. 2022;8:20552076221078110. [FREE Full text] [CrossRef] [Medline]
- Shinners L, Aggar C, Grace S, Smith S. Exploring healthcare professionals' perceptions of artificial intelligence: validating a questionnaire using the e-Delphi method. Digit Health. 2021;7:20552076211003433. [FREE Full text] [CrossRef] [Medline]
- Trist EL. The Evolution of Socio-Technical Systems: A Conceptual Framework and an Action Research Program. Toronto. Ontario Ministry of Labour, Ontario Quality of Working Life Centre; 1981.
- von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, et al. STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet. 2007;370(9596):1453-1457. [FREE Full text] [CrossRef] [Medline]
- Australian Bureau of Statistics. Gold Coast 2021 Census all persons QuickStats. 2023. URL: https://www.abs.gov.au/census/find-census-data/quickstats/2021/309 [accessed 2023-05-12]
- Gold Coast City Council. City of Gold Coast population data. 2023. URL: https://www.goldcoast.qld.gov.au/Council-region/About-our-city/Population-data [accessed 2023-05-12]
- Queensland Health, State of Queensland (Gold Coast Hospital and Health Service). Annual Report 2022-2023. 2023. URL: https://www.goldcoast.health.qld.gov.au/about-us/news/gold-coast-health-annual-report-highlights-2022-2023 [accessed 2023-12-07]
- Queensland Health. Allied Health Workforce [intranet site]. Office of the Chief Allied Health Officer. 2023. URL: https://www.health.qld.gov.au/ahwac [accessed 2023-06-12]
- Smith F. Health services research methods in pharmacy: survey research: (2) survey instruments, reliability and validity. Int J Pharm Pract. 2011;5(4):216-226.
- Kelley K, Clark B, Brown V, Sitzia J. Good practice in the conduct and reporting of survey research. Int J Qual Health Care. 2003;15(3):261-266. [CrossRef] [Medline]
- Tavakol M, Dennick R. Making sense of Cronbach's alpha. Int J Med Educ. 2011;2:53-55. [FREE Full text] [CrossRef] [Medline]
- Peterson RA, Kim Y. On the relationship between coefficient alpha and composite reliability. J Appl Psychol. 2013;98(1):194-198. [CrossRef] [Medline]
- Ware JE, Gandek B. Methods for testing data quality, scaling assumptions, and reliability: the IQOLA project approach. International quality of life assessment. J Clin Epidemiol. 1998;51(11):945-952. [CrossRef] [Medline]
- Baudin K, Gustafsson C, Frennert S. Views of Swedish elder care personnel on ongoing digital transformation: cross-sectional study. J Med Internet Res. 2020;22(6):e15450. [FREE Full text] [CrossRef] [Medline]
- Scheetz J, Rothschild P, McGuinness M, Hadoux X, Soyer HP, Janda M, et al. A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology. Sci Rep. 2021;11(1):5193. [FREE Full text] [CrossRef] [Medline]
- Australian Bureau of Statistics. Retirement and retirement intentions, Australia. URL: https://www.abs.gov.au/statistics/labour/employment-and-unemployment/retirement-and-retirement-intentions-australia/latest-release [accessed 2023-12-21]
- A National Policy Roadmap for Artificial Intelligence in Healthcare. North Ryde. Australian Alliance for Artificial Intelligence in Healthcare; 2023.
- Butler-Henderson K, Dalton L, Probst Y, Maunder K, Merolli M. A meta-synthesis of competency standards suggest allied health are not preparing for a digital health future. Int J Med Inform. 2020;144:104296. [CrossRef] [Medline]
- Lottonen T, Kaihlanen A, Nadav J, Hilama P, Heponiemi T. Nurses' and physicians' perceptions of the impact of eHealth and information systems on the roles of health care professionals: a qualitative descriptive study. Health Informatics J. 2024;30(1):14604582241234261. [FREE Full text] [CrossRef] [Medline]
- Kaihlanen A, Laukka E, Nadav J, Närvänen J, Saukkonen P, Koivisto J, et al. The effects of digitalisation on health and social care work: a qualitative descriptive study of the perceptions of professionals and managers. BMC Health Serv Res. 2023;23(1):714. [FREE Full text] [CrossRef] [Medline]
- Sit C, Srinivasan R, Amlani A, Muthuswamy K, Azam A, Monzon L, et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging. 2020;11(1):14. [FREE Full text] [CrossRef] [Medline]
- Hogg HDJ, Al-Zubaidy M, Technology Enhanced Macular Services Study Reference Group, Talks J, Denniston AK, Kelly CJ, et al. Stakeholder perspectives of clinical artificial intelligence implementation: systematic review of qualitative evidence. J Med Internet Res. 2023;25:e39742. [FREE Full text] [CrossRef] [Medline]
- Pinto Dos Santos D, Giese D, Brodehl S, Chon SH, Staab W, Kleinert R, et al. Medical students' attitude towards artificial intelligence: a multicentre survey. Eur Radiol. 2019;29(4):1640-1646. [CrossRef] [Medline]
- Rainey C, O'Regan T, Matthew J, Skelton E, Woznitza N, Chu K, et al. An insight into the current perceptions of UK radiographers on the future impact of AI on the profession: A cross-sectional survey. J Med Imaging Radiat Sci. 2022;53(3):347-361. [FREE Full text] [CrossRef] [Medline]
- Swan BA. Assessing the knowledge and attitudes of registered nurses about artificial intelligence in nursing and health care. Nursing Economic$. 2021;39(3):139-143.
- Yüzbaşıoğlu E. Attitudes and perceptions of dental students towards artificial intelligence. J Dent Educ. 2021;85(1):60-68. [CrossRef] [Medline]
- Holzner D, Apfelbacher T, Rödle W, Schüttler C, Prokosch H-U, Mikolajczyk R, et al. Attitudes and acceptance towards artificial intelligence in medical care. Stud Health Technol Inform. 2022;294:68-72. [CrossRef] [Medline]
- Shelmerdine SC, Rosendahl K, Arthurs OJ. Artificial intelligence in paediatric radiology: international survey of health care professionals' opinions. Pediatr Radiol. 2022;52(1):30-41. [CrossRef] [Medline]
- Ziebland S, Hyde E, Powell J. Power, paradox and pessimism: on the unintended consequences of digital health technologies in primary care. Soc Sci Med. 2021;289:114419. [CrossRef] [Medline]
- Woods L, Janssen A, Robertson S, Morgan C, Butler-Henderson K, Burton-Jones A, et al. The typing is on the wall: Australia's healthcare future needs a digitally capable workforce. Aust Health Rev. 2023;47(5):553-558. [CrossRef] [Medline]
Abbreviations
AAMLS: approximate adjusted mean Likert scale
AH: allied health
AHP: allied health professional
AI: artificial intelligence
AMFS: adjusted mean factor score
CFA: confirmatory factor analysis
CFI: comparative fit index
CVSR: critical value of studentized range
GCHHS: Gold Coast Hospital and Health Service
RMSEA: root-mean-square error of approximation
SHAIP: Shinners Artificial Intelligence Perception
SRMR: standardized root-mean-squared residual
STROBE: Strengthening the Reporting of Observational Studies in Epidemiology
TLI: Tucker-Lewis index
Edited by A Mavragani; submitted 13.02.24; peer-reviewed by H Mozaffar, J Thrall; comments to author 06.04.24; revised version received 26.06.24; accepted 19.09.24; published 30.12.24.
Copyright © Jane Hoffman, Laetitia Hattingh, Lucy Shinners, Rebecca L Angus, Brent Richards, Ian Hughes, Rachel Wenke. Originally published in JMIR Formative Research (https://formative.jmir.org), 30.12.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.