Published in Vol 8 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/53918.

Chinese Oncologists’ Perspectives on Integrating AI into Clinical Practice: Cross-Sectional Survey Study

Authors of this article:

Ming Li1; XiaoMin Xiong2; Bo Xu2, 3; Conan Dickson1

Original Paper

1Department of Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, United States

2Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing University School of Medicine, Chongqing, China

3Department of Biochemistry and Molecular Biology, Key Laboratory of Breast Cancer Prevention and Therapy, Ministry of Education, National Cancer Research Center, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China

*all authors contributed equally

Corresponding Author:

Conan Dickson, DrPH

Department of Health Policy and Management

Bloomberg School of Public Health

Johns Hopkins University

615 North Wolfe Street

Baltimore, MD, 21205

United States

Phone: 1 410 955 3543

Email: cdickso1@jh.edu


Background: The rapid development of artificial intelligence (AI) has generated significant interest in its potential applications in oncology. Although AI-powered tools are already being implemented in some Chinese hospitals, their integration into clinical practice raises several concerns for Chinese oncologists.

Objective: This study aims to explore the concerns of Chinese oncologists regarding the integration of AI into clinical practice and to identify the factors influencing these concerns.

Methods: A total of 228 Chinese oncologists participated in a cross-sectional web-based survey conducted from April to June 2023 in mainland China. The survey gauged their worries about AI with multiple-choice questions and evaluated their views on the statements “the impact of AI on the doctor-patient relationship” and “AI will replace doctors.” The data were analyzed using descriptive statistics, and bivariate analyses were used to identify correlations between the oncologists’ backgrounds and their concerns.

Results: The study revealed that the most prominent concerns were the potential for AI to mislead diagnosis and treatment (163/228, 71.5%); an overreliance on AI (162/228, 71%); data and algorithm bias (123/228, 54%); issues with data security and patient privacy (123/228, 54%); and a lag in laws, regulations, and policies in keeping up with AI’s development (115/228, 50.4%). Oncologists with a bachelor’s degree expressed heightened concerns related to data and algorithm bias (34/49, 69%; P=.03) and lagging laws, regulations, and policies (32/49, 65%; P=.046). Regarding AI’s impact on doctor-patient relationships, 53.1% (121/228) saw a positive impact, whereas 35.5% (81/228) found it difficult to judge, 9.2% (21/228) feared increased disputes, and 2.2% (5/228) believed that there is no impact. Although sex differences were not significant (P=.08), male oncologists tended to be more positive than female oncologists (74/135, 54.8% vs 47/93, 50%). Oncologists with a bachelor’s degree (26/49, 53%; P=.03) and experienced clinicians (≥21 years; 28/56, 50%; P=.054) found it the hardest to judge. Those with IT experience were significantly more positive (25/35, 71%) than those without (96/193, 49.7%; P=.02). Opinions regarding the possibility of AI replacing doctors were diverse: 23.2% (53/228) strongly disagreed, 14% (32/228) disagreed, 29.8% (68/228) were neutral, 16.2% (37/228) agreed, and 16.7% (38/228) strongly agreed. There were no significant correlations with demographic and professional factors (all P>.05).

Conclusions: Addressing oncologists’ concerns about AI requires collaborative efforts from policy makers, developers, health care professionals, and legal experts. Emphasizing transparency, human-centered design, bias mitigation, and education about AI’s potential and limitations is crucial. Through close collaboration and a multidisciplinary strategy, AI can be effectively integrated into oncology, balancing benefits with ethical considerations and enhancing patient care.

JMIR Form Res 2024;8:e53918

doi:10.2196/53918

Keywords



Introduction

Artificial intelligence (AI) has made substantial strides within the health care sector, effecting profound transformations in various fields including medicine, radiology, dermatology, ophthalmology, and pathology. The potential of AI to reform physicians’ clinical practices is significant [1].

AI unveils a plethora of opportunities within health care, demonstrating capabilities to augment a host of medical processes—from disease diagnostics and chronic disease management to clinical decision-making. With AI becoming increasingly ubiquitous, its utility in enhancing the accuracy and efficiency of clinical practice across a multitude of specializations is clear [2]. Particularly in the field of oncology, AI is revolutionizing practice paradigms, offering crucial advancements in the management of patients with cancer [3].

The proliferation of data and advances in computational algorithms have positioned AI to ameliorate clinical oncology via rigorously evaluated, narrow-task applications interacting at specific touch points along the cancer care path [4]. This, in turn, has expedited progress in oncology research, enhancing cancer diagnosis and treatment.

The concept of intelligent oncology was introduced as an emerging field that integrates various disciplines including oncology, radiology, pathology, molecular biology, multiomics, and computer science. This integration aims to leverage data and computational methods to improve cancer care and outcomes [5].

In China, approximately 32.86% of hospitals have adopted 1 or more AI products, and all university hospitals have integrated AI technologies [6]. These technologies are primarily used for AI-assisted imaging and clinical decision support across breast cancer, bone tumors, urological tumors, and many other types of cancer [7-12].

The AI Decision System was established on the Chinese Society of Clinical Oncology platform using its databases, guidelines, and technologies. The system’s main goal is to provide patients with breast cancer with more accurate and individualized medical decisions; it has been validated in clinical trials and implemented in many hospitals in China [13].

Differences in viewpoints across various specialties and demographic groups could significantly influence the speed and effectiveness of AI adoption. Distinct perspectives based on sex and age have been observed regarding AI [14]. It is crucial to have a comprehensive understanding of these differences to ensure the priorities of all stakeholders are taken into account. In China, especially among oncologists, there is a lack of research exploring physicians’ attitudes toward AI. Given that physicians are the main users of AI technologies, their perspectives and concerns need to be meticulously addressed.

Although most physicians recognize the potential benefits of AI in health care, some maintain a cautious stance toward its adoption [15]. Interestingly, about 41% of physicians find themselves equally excited and concerned about the possibilities that AI presents in the health care sector [16]. The effect of AI on patient outcomes is still uncertain, and significant obstacles to its adoption include biased and heterogeneous data and challenges in data management, among others [1].

The primary aim of this study is to delineate oncologists’ concerns surrounding AI. Understanding these concerns can inform strategies to foster AI acceptance and adoption in clinical practice, thereby optimizing patient outcomes. The insights derived from this study can provide valuable guidance to policy makers and regulatory bodies, assisting in comprehending AI’s current use, gauging its impact, identifying potential risks, and determining requisite regulations to ensure ethical and effective AI use. Moreover, these insights can aid AI firms in fine-tuning their products to better align with physicians’ needs, thereby increasing the practicality and utility of AI tools in clinical practice [17].


Methods

Study Design

The development of this questionnaire (Multimedia Appendix 1) was grounded in an extensive literature review, complemented by interviews with 11 oncologists. Before the survey’s deployment, these oncologists, who are specialists in various domains of cancer treatment such as medical, surgical, and radiation oncology and have experience with AI technologies in contexts such as medical imaging analysis and treatment recommendations, provided valuable feedback.

The inclusion criteria for participation in the study were limited to licensed oncologists currently practicing in Chinese hospitals and actively treating patients with cancer. The exclusion criteria ruled out general practitioners, general surgeons, medical residents, students, and other health care professionals such as nurses or technicians. Only attending physicians specializing in oncology who were seeing patients in Chinese hospitals were eligible to participate.

Using WeChat (Tencent), a popular communication tool in China, for survey distribution ensured a streamlined and effective data collection process. The survey was conducted from April 4 to June 30, 2023, and was distributed across the country by the China Anti-Cancer Association.

The questionnaire, presented in Chinese, was structured around 4 main components. The first section concentrated on the oncologists’ characteristics. The second section encompassed questions pertaining to their knowledge and perception of AI. The third section probed the promoting factors for the use of AI. The final section aimed to explore their concerns regarding AI. All question items were mandatory; a response could not be submitted with unanswered items, which ensured that no data were missing.

The main objective of the study was to explore oncologists’ concerns about AI. The survey was anchored by 3 principal questions, 1 of which focused on concerns about AI, providing 10 options for multiple-choice answers along with a free-text option for detailed responses. The impact of AI on the doctor-patient relationship was assessed through 4 predefined options: positive, negative, no impact, and hard to judge. Furthermore, the survey measured participants’ views on the assertion that “AI will replace doctors” using a 5-point Likert scale.

Ethical Considerations

The Chongqing University Cancer Hospital’s Institutional Review Board approved the study (CZLS2022244-A). An electronic consent form was presented on the initial page of the questionnaire. Only participants who agreed to this consent form could continue to answer the questionnaire. Participation in this survey was entirely voluntary and anonymous, and data were deidentified. No compensation was provided for participation.

Data Analysis

Descriptive statistics were used to summarize the survey findings, notably the ranking of oncologists’ concerns. The association between physicians’ characteristics (eg, sex, education, and years of experience) and their AI-related concerns was evaluated using the Pearson χ2 test. All tests were 2-sided, and P values less than .05 were considered statistically significant. All data analyses were performed using SPSS software (version 22.0; IBM Corp).
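To illustrate this procedure, the following minimal Python sketch (an illustration only; the study itself used SPSS) reproduces the sex × “AI misleads diagnosis and treatment” test reported in Table 3, using the counts from that table and scipy.stats.chi2_contingency:

# Minimal sketch of the 2-sided Pearson chi-square test used in this study,
# reproducing one cell of Table 3 (sex vs "AI misleads diagnosis and treatment").
from scipy.stats import chi2_contingency

#         concerned  not concerned
table = [[97, 135 - 97],   # male oncologists (n=135)
         [66, 93 - 66]]    # female oncologists (n=93)

chi2, p, df, expected = chi2_contingency(table, correction=False)  # Pearson, no Yates correction
print(f"chi-square={chi2:.3f}, df={df}, P={p:.2f}")  # chi-square=0.021, df=1, P=0.88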


Results

Oncologists’ Characteristics

Our study involved a sample of 228 oncologists. The majority were male (n=135, 59.2%), whereas female participants constituted 40.8% (n=93). The largest age group was 31-40 years (n=95, 41.7%), followed by 41-50 years (n=80, 35.1%), younger than 30 years (n=28, 12.3%), and 51-60 years (n=25, 11%). Regarding years of clinical practice, the most represented group had 11-20 years of experience (n=126, 55.3%), compared to those with 0-10 years (n=49, 21.5%) and over 20 years (n=53, 23.2%). In terms of education, the largest proportion held a bachelor’s degree (n=89, 39%), with fewer having master’s (n=83, 36.4%) or doctoral degrees (n=56, 24.6%). Medical oncology was the most common specialty (n=97, 42.5%), followed by surgical oncology (n=77, 33.8%), radiation therapy (n=40, 17.5%), and other specialties (n=14, 6.1%) such as Chinese traditional medicine oncologists and gynecologic oncologists. Most oncologists worked in university hospitals (n=148, 64.9%), whereas others worked in nonuniversity hospitals (n=80, 35.1%). Experience with IT projects was limited, with only 15.4% (n=35) having such experience, compared to 84.6% (n=193) without (Table 1).

Table 1. Oncologists’ characteristics (N=228). Values are oncologists, n (%).

Sex
    Male: 135 (59.2)
    Female: 93 (40.8)

Years of clinical practice
    0-10: 49 (21.5)
    11-20: 126 (55.3)
    ≥21: 53 (23.2)

Education degree
    Bachelor’s: 89 (39)
    Master’s: 83 (36.4)
    Doctoral: 56 (24.6)

Specialty
    Medical oncology: 97 (42.5)
    Surgical oncology: 77 (33.8)
    Radiation therapy: 40 (17.5)
    Others: 14 (6.1)

Hospital type
    University hospital: 148 (64.9)
    Nonuniversity hospital: 80 (35.1)

Experience with IT projects
    Yes: 35 (15.4)
    No: 193 (84.6)

Oncologists’ Concerns About AI

Respondents indicated their level of concern regarding aspects of implementing AI in health care by selecting from the available multiple-choice options; the key findings are summarized in Table 2.

From the 228 respondents, the most prominent concern related to the risk that AI could mislead physicians’ diagnosis and treatment, causing medical errors and impacting patient safety (n=163, 71.5%), followed closely by the potential decrease in physicians’ diagnostic and therapeutic capabilities due to an overreliance on AI (n=162, 71%).

Concerns about data and algorithm bias and the inapplicability of AI to actual clinical situations were expressed by 54% (n=123) of the respondents, tying with worries about data security and patient privacy. The lag of laws, regulations, and policies behind AI’s development was a prominent issue for 50.4% (n=115) of the respondents.

The “black box” phenomenon and lack of trust were cited as problems by 39.5% (n=90) of the respondents. The lack of empathy in AI, demonstrating a deficiency in humanlike emotions, was a concern for 36.8% (n=84) of the physicians. Issues related to the pricing of AI products and their impact on widespread use were pointed out by 25.4% (n=58) of respondents, and 16.7% (n=38) found the operation of AI products complex and not well integrated within existing workflow.

Interestingly, only 3.1% (n=7) of the physicians felt no concern associated with the application of AI, and a very small percentage (n=2, 0.9%) marked “other” concerns.
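Because respondents could select multiple options, each percentage in Table 2 uses all 228 respondents as its denominator, so the percentages sum to more than 100%. The short Python sketch below (illustrative only; the data frame and its "concerns" field are hypothetical stand-ins for the survey export) shows how such a multiple-choice question can be tabulated:

import pandas as pd

# One row per respondent; "concerns" holds the list of options that respondent selected.
responses = pd.DataFrame({
    "concerns": [
        ["AI misleads diagnosis and treatment", "Overreliance on AI"],
        ["Data and algorithm bias", "Overreliance on AI"],
        # ... one row per oncologist (N=228 in the study)
    ]
})

counts = responses["concerns"].explode().value_counts()  # selections per concern
percentages = (counts / len(responses) * 100).round(1)   # denominator: all respondents
print(pd.DataFrame({"n": counts, "%": percentages}))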

Supplementary analyses were executed to discern potential variations in AI concerns, based on physicians’ demographic and professional traits. These analyses considered the sex, age, education level, years of clinical practice, area of specialty, hospital type, and IT experience of the participating clinicians (Tables 3 and 4).

As shown in the first rows of Tables 3 and 4, our data revealed no statistically significant differences between male and female oncologists in their perceptions of AI in the health care context (all P>.05). Overall, 71.8% (97/135) of male physicians and 71% (66/93) of female physicians were concerned about AI misleading diagnosis and treatment, whereas concerns about an overreliance on AI were shared by 69.6% (94/135) of male physicians and 73.1% (68/93) of female physicians.

When examining education level, oncologists holding a bachelor’s degree were more likely to be concerned about data and algorithm bias (34/49, 69%; P=.03) and about laws, regulations, and policies lagging (32/49, 65%; P=.046).

Considering the years of clinical practice, oncologists with 0-10 years of experience exhibited less concern about laws, regulations, and policies lagging behind (37/89, 42%; P=.047).

Regarding the clinician’s area of specialty, no significant differences were detected in their concerns about AI (all P>.05).

In terms of hospital type, clinicians working at university hospitals showed a trend toward greater concern about the business model issues of AI services, although this did not reach statistical significance (17/58, 29%; P=.09).

Lastly, with regard to IT experience, clinicians with such experience expressed significantly less concern about an overreliance on AI (19/35, 54%; P=.02) and about AI’s lack of empathy (12/84, 14%; P=.003) compared with those without IT experience.

Table 2. Oncologists’ concerns about AIa (N=228). Values are oncologists, n (%).

    AI misleads diagnosis and treatment: 163 (71.5)
    Overreliance on AI: 162 (71)
    Data and algorithm bias: 123 (54)
    Data security and patient privacy issues: 123 (54)
    Laws, regulations, and policies lagging: 115 (50.4)
    “Black box” phenomenon: 90 (39.5)
    AI has no empathy and lacks human emotions: 84 (36.8)
    Business model issues affect AI promotion: 58 (25.4)
    Not easy to use and not well integrated with clinical workflow: 38 (16.7)
    No concern: 7 (3.1)
    Others: 2 (0.9)

aAI: artificial intelligence.

Table 3. Oncologists’ characteristics in relation to their concerns about AIa (part 1). Each characteristic line gives the chi-square (df) and P value for each of the 3 concerns, in the order: AI misleads diagnosis and treatment; overreliance on AI; data and algorithm bias. Each subgroup row gives oncologists, n (%)b, in the same order.

Sex: 0.021 (1), P=.88; 0.326 (1), P=.57; 0.05 (1), P=.82
    Male (N=135): 97 (71.8); 94 (69.6); 72 (53.3)
    Female (N=93): 66 (71); 68 (73.1); 51 (54.8)

Education degree: 1.827 (2), P=.40; 1.665 (2), P=.44; 6.746 (2), P=.03c
    Bachelor’s (N=49): 36 (73.5); 38 (77.6); 34 (69.4)
    Master’s (N=126): 93 (73.8); 89 (70.6); 60 (47.6)
    Doctoral (N=53): 34 (64.2); 35 (66); 29 (54.7)

Years of clinical practice: 5.187 (2), P=.08; 3.556 (2), P=.17; 2.634 (2), P=.27
    0-10 (N=89): 67 (75.3); 57 (64); 50 (56.2)
    11-20 (N=83): 52 (62.6); 62 (74.7); 48 (57.8)
    ≥21 (N=56): 44 (78.6); 43 (76.8); 25 (44.6)

Specialty: 1.769 (3), P=.62; 1.242 (3), P=.74; 3.6 (3), P=.31
    Medical oncology (N=97): 67 (69.1); 66 (68); 56 (57.7)
    Surgical oncology (N=77): 54 (70.1); 55 (71.4); 35 (45.4)
    Radiation therapy (N=40): 32 (80); 31 (77.5); 23 (57.5)
    Others (N=14): 10 (71.4); 10 (71.4); 9 (64.3)

Hospital type: 0.563 (1), P=.90; 0.756 (1), P=.38; 1.34 (1), P=.25
    University hospital (N=148): 41 (76.5); 108 (73); 84 (56.8)
    Nonuniversity hospital (N=80): 56 (69.7); 54 (67.5); 39 (48.8)

IT experience: 0.158 (1), P=.69; 5.651 (1), P=.02c; 1.321 (1), P=.25
    Yes (N=35): 26 (74.3); 19 (54.3); 22 (62.9)
    No (N=193): 137 (71); 143 (74.1); 101 (52.3)

aAI: artificial intelligence.

bPercentages are expressed with the subgroup total (N) as the denominator.

cP<.05.

Table 4. Oncologists’ characteristics in relation to their concerns about AIa (part 2). Each characteristic line gives the chi-square (df) and P value for each of the 3 concerns, in the order: data security and patient privacy issues; laws, regulations, and policies lagging; “black box” phenomenon. Each subgroup row gives oncologists, n (%)b, in the same order.

Sex: 0.1 (1), P=.75; 0.614 (1), P=.43; 0.326 (1), P=.57
    Male (N=135): 74 (54.8); 71 (52.6); 94 (69.6)
    Female (N=93): 49 (52.7); 44 (47.3); 68 (73.1)

Education degree: 0.634 (2), P=.73; 6.149 (2), P=.046c; 0.106 (2), P=.95
    Bachelor’s (N=49): 28 (57.1); 32 (65.3); 20 (40.8)
    Master’s (N=126): 65 (51.6); 56 (44.4); 50 (39.7)
    Doctoral (N=53): 30 (56.6); 27 (50.9); 20 (37.7)

Years of clinical practice: 0.772 (2), P=.68; 6.119 (2), P=.047c; 0.635 (2), P=.73
    0-10 (N=89): 46 (51.7); 37 (41.6); 38 (37.4)
    11-20 (N=83): 44 (53); 43 (51.8); 31 (37.5)
    ≥21 (N=56): 33 (58.9); 35 (62.5); 21 (37.5)

Specialty: 1.695 (3), P=.64; 4.785 (3), P=.19; 0.928 (3), P=.82
    Medical oncology (N=97): 48 (49.5); 44 (45.4); 39 (40.2)
    Surgical oncology (N=77): 45 (58.4); 41 (53.2); 28 (36.4)
    Radiation therapy (N=40): 23 (57.5); 25 (62.5); 18 (45)
    Others (N=14): 7 (50); 5 (35.7); 5 (35.7)

Hospital type: 0.563 (1), P=.90; 0.141 (1), P=.71; 1.69 (1), P=.19
    University hospital (N=148): 79 (53.4); 76 (51.4); 63 (42.6)
    Nonuniversity hospital (N=80): 44 (55); 39 (48.8); 27 (33.8)

IT experience: 0.158 (1), P=.69; 0.369 (1), P=.54; 1.12 (1), P=.29
    Yes (N=35): 16 (45.7); 16 (45.7); 11 (31.4)
    No (N=193): 107 (55.4); 99 (51.3); 79 (40.9)

aAI: artificial intelligence.

bPercentages are expressed with the subgroup total (N) as the denominator.

cP<.05.

Oncologists’ View on “The Impact of AI on the Doctor-Patient Relationship”

As for the impact of AI on the doctor-patient relationship, a majority (121/228, 53.1%) of respondents believed that AI would have a positive impact. However, 9.2% (21/228) felt that AI could cause trouble and increase disputes between doctors and patients, whereas 2.2% (5/228) believed it would have no impact. In all, 35.5% (81/228) reported that it was hard to judge, indicating mixed feelings about the statement (Table 5).

The study revealed that perceptions of AI’s impact on the doctor-patient relationship varied with sex, education, and clinical experience.

Regarding sex, female physicians tended to find it harder to judge than male physicians (39/93, 42% vs 42/135, 31.1%), whereas male physicians were more positive than female physicians (74/135, 54.8% vs 47/93, 50%). However, the difference in proportions was not statistically significant (P=.08). Education degree appeared to influence the responses. Those with a bachelor’s degree showed the highest difficulty in making a judgment (26/49, 53%), and the difference in response according to education level was statistically significant (P=.03). When analyzing the years of clinical practice, practitioners with 21 or more years of experience had the highest difficulty in making a judgment (28/56, 50%). However, this difference was not statistically significant (P=.054). Regarding specialties, there were no significant differences among the responses (P=.15). The hospital type did not show any significant differences either (P=.42). Lastly, IT experience played a significant role in judgment, with those having IT experience showing more positive responses (25/35, 71%) compared to those without IT experience (96/193, 49.7%). This difference was statistically significant (P=.02; Table 6).

Table 5. Oncologists’ view on the statement “the impact of AI on the doctor-patient relationship” (N=228). Values are oncologists, n (%).

    Positive: 121 (53.1)
    Negative: 21 (9.2)
    No impact: 5 (2.2)
    Hard to judge: 81 (35.5)
Table 6. Oncologists’ characteristics in relation to the statement “the impact of AIa on the doctor-patient relationship.” Each subgroup row gives oncologists, n (%)b, in the order: hard to judge; positive; negative; no impact. The chi-square (df) and P value are given per characteristic.

Sex: 6.88 (3), P=.08
    Male (N=135): 42 (31.1); 74 (54.8); 17 (12.6); 2 (1.5)
    Female (N=93): 39 (41.9); 47 (50.5); 4 (4.3); 3 (3.2)

Education degree: 13.829 (6), P=.03c
    Bachelor’s (N=49): 26 (53.1); 18 (36.7); 5 (10.2); 0 (0)
    Master’s (N=126): 41 (32.5); 70 (55.6); 10 (7.9); 5 (4)
    Doctoral (N=53): 14 (26.4); 33 (62.3); 6 (11.3); 0 (0)

Years of clinical practice: 12.378 (9), P=.054
    0-10 (N=89): 28 (31.5); 55 (61.8); 4 (4.5); 2 (2.4)
    11-20 (N=83): 25 (30.1); 44 (53); 12 (14.5); 2 (2.2)
    ≥21 (N=56): 28 (50); 22 (39.3); 5 (8.9); 1 (1.8)

Specialty: 13.323 (6), P=.15
    Medical oncology (N=97): 33 (34); 56 (57.7); 5 (5.2); 3 (3.1)
    Surgical oncology (N=77): 26 (33.8); 38 (49.4); 13 (16.9); 0 (0)
    Radiation therapy (N=40): 18 (45); 19 (47.5); 2 (5); 1 (2.5)
    Others (N=14): 4 (28.6); 8 (57.1); 1 (7.1); 1 (7.1)

Hospital type: 2.796 (3), P=.42
    University hospital (N=148): 50 (33.8); 78 (52.7); 17 (11.5); 3 (2)
    Nonuniversity hospital (N=80): 31 (38.8); 43 (53.8); 4 (5); 2 (2.5)

IT experience: 10.233 (3), P=.02c
    Yes (N=35): 5 (14.3); 25 (71.4); 3 (8.6); 2 (5.7)
    No (N=193): 76 (39.4); 96 (49.7); 18 (9.3); 3 (1.6)

aAI: artificial intelligence.

bPercentages are expressed with the subgroup total (N) as the denominator.

cP<.05.

Oncologists’ View on “AI Will Replace Doctors”

In terms of acceptance of the statement “AI will replace doctors,” the results indicated mixed opinions. Overall, 23.2% (53/228) strongly disagreed with the statement, 14% (32/228) disagreed, 29.8% (68/228) were neutral, 16.2% (37/228) agreed, and 16.7% (38/228) strongly agreed (Table 7).

The study revealed a diversity of views on AI’s potential to replace doctors, but there were no significant correlations with demographic and professional factors (all P>.05).

Table 7. Oncologists’ view on the statement “AIa will replace doctors” (N=228). Values are oncologists, n (%).

    Strongly disagree: 53 (23.2)
    Disagree: 32 (14)
    Neutral: 68 (29.8)
    Agree: 37 (16.2)
    Strongly agree: 38 (16.7)

aAI: artificial intelligence.


Discussion

Principal Findings

Our survey delved into the concerns held by oncologists regarding the integration of AI into their discipline. The data elucidate an array of apprehensions that vary significantly in both content and priority across multiple factors, and understanding these factors is integral to the smooth adoption of AI within health care. Primary among these apprehensions are misleading diagnoses and treatments by AI, an overreliance on AI potentially diminishing doctors’ capabilities, bias in algorithms and data, issues pertaining to data security and patient privacy, and legal challenges. These emerged as the top 5 concerns in this study.

A total of 71.5% (163/228) of respondents expressed anxiety over AI misleading diagnoses and treatments, potentially leading to medical errors and compromising patient safety—a common concern among many physicians [18-20]. Several reasons can underpin this concern. First, AI systems are reliant on training data; biased data lead to a biased AI, potentially resulting in improper diagnosis or treatment recommendations [21]. Second, an AI’s predictions are restricted to its training data. Consequently, incomplete data could lead to inaccurate predictions [22]. Third, the complex and opaque nature of AI systems can hinder users’ comprehension of their conclusions or suggestions, leading to decisions based on incomplete or inaccurate information, thereby potentially harming patients [20]. Fourth, misuse by health care providers lacking appropriate training on AI could lead to suboptimal patient outcomes [23].

Another concern, held by 71% (162/228) of respondents, revolved around an overreliance on AI, leading to a decrease in their diagnostic and treatment capabilities. This aligns with literature emphasizing the necessity of a balanced approach toward incorporating AI into health care, where AI serves as an auxiliary tool, not a replacement for health care professionals [24]. Overreliance can potentially erode critical skills acquired through education, training, and experience [25], as well as lead to complacency and blind trust in AI’s decision-making, consequently compromising patient safety [26].

A majority (123/228, 54%) of respondents expressed concern over data security and patient privacy, resonating with recent studies highlighting similar apprehensions in the era of AI [27,28]. AI systems’ tendency to collect and process vast amounts of patient data makes them an attractive target for hackers, with potential fallout including identity theft, financial loss, reputational damage, and loss of trust [29,30]. This underscores the necessity of robust data protection measures.

Another issue, raised by 54% (123/228) of the participants, pertained to bias in AI’s data and algorithms. These biases can significantly impact health AI, potentially leading to inaccurate diagnoses, missed treatments, and negative outcomes for patients. These biases might also exacerbate existing inequalities [31,32].

Legal ambiguities surrounding AI use were a concern for half (115/228, 50.4%) of the respondents, with an unclear delineation of medical responsibilities posing potential risks. The laws, regulations, and policies governing AI in health care are still evolving, creating uncertainty for health care organizations and providers [33,34]. The European Commission has put forward new regulatory measures for the deployment of high-risk AI, and it has been argued, under the heading “Fundamental Rights as Legal Guidelines for Medical AI,” that the current framework of European fundamental rights already lays down explicit directives for using medical AI. Within this context, “obligations to protect” gain significant relevance in the medical field, mandating that health care service providers adopt quality-assurance practices [35]. However, the swift and expansive progression of AI technology significantly amplifies the threats associated with the underuse of AI, and overregulation threatens to forgo its potential benefits [36].

The “black box” phenomenon of AI products was a concern for 39.5% (90/228) of participants, mirroring the general call in the literature for more explainable and interpretable AI models [37]. Trustworthy AI must enable professionals to confidently assume responsibility for their decisions, which underscores the importance of explainable AI techniques. The analytical and clinical effectiveness of AI algorithms requires consistent monitoring, and effective oversight demands that both explainability and causality be evident, because experts need proof of both to responsibly manage their roles. Therefore, AI must integrate causality assessments to uphold the standard of its explainability [38].

The fact that AI lacks empathy, as noted by 36.8% (84/228) of respondents, is a recurring theme in AI ethics discussions, underlining the irreplaceability of the human touch in medical care [39]. The complex operation of AI products (38/228, 16.7%), business model issues (58/228, 25.4%), and poor integration with existing workflows highlight the need for more user-friendly AI solutions that integrate well with health care systems [40]. Interestingly, a small proportion (7/228, 3.1%) of respondents did not perceive any risks associated with AI. This variability in perceptions could be due to differences in understanding, knowledge, and exposure to AI among the respondents [41].

Concerns regarding AI are shaped by a complex blend of demographic, professional, and regional variables. The apprehensions of physicians in their later career stages might be influenced by their technological fluency and privacy concerns [1]. Early-career doctors, who might be more familiar with digital technology, may be more accepting of AI and show less concern than late-career doctors, who might prefer traditional methods. Sex differences were noticeable, with female physicians often expressing ethical concerns, whereas male physicians focused on the potential applications and benefits of AI [42]. Higher education can provide knowledge, critical thinking skills, technology exposure, and learning confidence that make individuals more receptive to emerging technologies such as AI, leading to greater trust and acceptance and thus less concern about AI [43,44]. This allows individuals to evaluate the potential risks and benefits of AI more effectively, rather than simply fearing the unknown. However, there are likely other mediating factors, and more research is needed. In our study, we did not find significant differences by medical specialty; factors such as geographic practice location, professional experience, and cultural background have also been reported to significantly influence doctors’ concerns [45]. Acknowledging these intricacies is essential for effective and empathetic AI integration.

Our research also suggests that IT experience makes a difference. Oncologists with IT experience might have a better understanding of the capabilities and limitations of AI, making them more confident in integrating it into their practice. On the other hand, those without IT experience may have apprehensions due to unfamiliarity with the technology.

AI introduces novel challenges for the doctor-patient relationship, as it carries the potential to revolutionize modes of clinical interaction. Consequently, the doctor-patient relationship may evolve from a dyad to a triad, encompassing the doctor, patient, and AI [46]. AI’s role in medicine can instigate a positive shift in the patient-physician relationship. It has been indicated that AI can positively impact doctor-patient relationships, particularly by serving in an assistive role and enhancing medical education [47]. However, its impact on clinical practice and the doctor-patient relationship remains largely undetermined. The effect is likely to vary based on AI’s specific application and use context. AI might also result in a lower standard of care, characterized by fewer personal, face-to-face interactions [48].

Most oncologists surveyed recognize AI’s potential to positively influence the doctor-patient relationship, especially in terms of enhancing patient understanding. Oncologists with higher educational degrees and IT experience tended to have a more positive view of AI. There was also a slight sex difference, with male oncologists appearing slightly more positive toward AI’s impact. However, apprehensions still existed, and these appeared to be influenced by factors such as sex, educational background, and years of clinical practice. This highlights the necessity for nuanced, demographic-specific strategies when incorporating AI into health care practices, to address diverse concerns and expectations.

The idea of AI replacing doctors has been the subject of numerous discussions, studies, and debates in recent years. The study data indicate that oncologists showed mixed responses to the statement “AI will replace doctors.” Overall, most oncologists, regardless of sex, age, education degree, years of clinical practice, specialty, type of hospital, or IT experience, tended to be neutral on the question. There were no statistically significant differences in views based on the analyzed factors, suggesting that other factors not captured in this study might be influential, or that views on AI’s capacity to replace doctors were generally ambivalent within this professional group. Some scholars and practitioners argue that AI has the potential to outperform humans in some areas of medicine, particularly in tasks involving data analysis and interpretation, such as radiology, pathology, and genomics [49-51]. Others argue that AI currently lacks the generalized intelligence, emotional skills, reasoning capacity, and societal trust needed to fully replace human physicians [52-56].

Despite the differing views, it is apparent that the medical community is not currently endorsing the notion of AI replacing doctors. It is generally agreed that AI should be used as a tool that assists health care professionals and works in collaboration with them, rather than replacing them [24]. This reflects a pragmatic approach, recognizing the potential of AI to enhance health care delivery while valuing the irreplaceable aspects of human medical practice.

Suggestions

To effectively address the concerns raised by oncologists about the use of AI in health care, it is essential for AI stakeholders, designers, and researchers to focus on a comprehensive strategy encompassing critical actions.

Educating health care professionals about AI’s capabilities and limitations is vital to prevent overreliance and foster a balanced approach where AI acts as a supportive tool rather than a replacement. Education was identified as a priority to prepare clinicians for the implementation of AI in health care [45].

Encouraging multidisciplinary collaboration among AI researchers, health care professionals, ethicists, and policy makers can address the complex challenges of AI integration, ensuring responsible and ethical development and deployment [57].

AI vendors must prioritize the transparency and explainability of AI systems to demystify their operations for clinicians, thereby tackling the “black box” issue [58].

Emphasizing human-centered design and empathy is crucial; AI tools should be developed with health care professionals’ involvement to ensure that they address clinical needs and seamlessly fit into existing workflows, thus enhancing user experience and bridging the emotional gap between AI and humans. Addressing bias through rigorous testing and validation across diverse data sets is essential to prevent perpetuating existing inequalities and ensure that AI applications are equitable [59].

By concentrating on these targeted actions, AI stakeholders can substantially contribute to the responsible and effective integration of AI in oncology, ultimately enhancing patient outcomes and fostering trust in AI-assisted health care.

Limitations

This study has several limitations. First, it is constrained by its small sample size, which diminishes statistical power and heightens the potential for error; a future study will expand participant recruitment to mitigate this issue. Second, individuals presented with a survey on a topic they find engaging are more likely to participate than those who find the topic less interesting [60], which may have introduced selection bias. Third, the questionnaire was adapted from validated instruments across many studies and was developed by doctors rather than AI experts, and some items were excluded on the experts’ advice. Thus, our study provides information only on physicians’ concerns about AI.

Conclusion

In conclusion, this study has highlighted the primary concerns of oncologists regarding AI, underscoring significant implications for stakeholders in the health care sector. To successfully integrate AI into health care, it is imperative to address these concerns through a unified effort involving policy makers, AI developers, health care professionals, and legal experts. A comprehensive strategy, encompassing transparent and understandable AI systems and human-centered design, addressing biases, and educating health care providers on AI’s capabilities and limitations, is essential. Such a collaborative and multidisciplinary approach will pave the way for AI to become a valuable ally in health care, thus enhancing patient care and outcomes.

Acknowledgments

The authors thank the China Anti-Cancer Association for helping to publish the survey questionnaire. BX and CD are co-corresponding authors.

Data Availability

The data sets generated during and/or analyzed during this study are available from the corresponding author on reasonable request.

Authors' Contributions

ML was responsible for conceptualization, data curation, data analysis, visualization, and writing—original draft. XX was responsible for conceptualization, methodology, and writing—original draft. CD was responsible for conceptualization and methodology. BX was responsible for conceptualization, methodology, writing—review and editing, and supervision. All authors have read and approved the final manuscript. ML and XX are co-first authors.

Conflicts of Interest

None declared.

Multimedia Appendix 1

The questionnaire of this study.

DOCX File , 19 KB

  1. Sarwar S, Dent A, Faust K, Richer M, Djuric U, van Ommeren R, et al. Physician perspectives on integration of artificial intelligence into diagnostic pathology. NPJ Digit Med. Apr 26, 2019;2:28. [FREE Full text] [CrossRef] [Medline]
  2. Ahuja AS. The impact of artificial intelligence in medicine on the future role of the physician. PeerJ. Oct 4, 2019;7:e7702. [FREE Full text] [CrossRef] [Medline]
  3. Luchini C, Pea A, Scarpa A. Artificial intelligence in oncology: current applications and future perspectives. Br J Cancer. Jan 2022;126(1):4-9. [FREE Full text] [CrossRef] [Medline]
  4. Kann BH, Hosny A, Aerts HJWL. Artificial intelligence for clinical oncology. Cancer Cell. Jul 12, 2021;39(7):916-927. [FREE Full text] [CrossRef] [Medline]
  5. Lin B, Tan Z, Mo Y, Yang X, Liu Y, Xu B. Intelligent oncology: the convergence of artificial intelligence and oncology. J Natl Cancer Cent. Mar 2023;3(1):83-91. [CrossRef]
  6. Zhang X. Annual Report on Medical Artificial Intelligence in China (2020). Beijing, China. Social Sciences Academic Press; Sep 2020:109-113.
  7. Peng W, Gu Y, Gong J. Current status and prospects of artificial intelligence applications in breast tumor imaging [Article in Chinese]. Chinese Journal of Radiology. Feb 10, 2023;57(2):121-124. [CrossRef]
  8. Sun Y, Gong L, Liu W. Research progress of artificial intelligence in the diagnosis and treatment of bone tumors [Article in Chinese]. Chinese Journal of Orthopedics. Aug 1, 2023;43(15):1050-1056. [CrossRef]
  9. Xu W, Tian X, Aihe TA, Qu Y, Shi G, Zhang H, et al. Research progress on the application of artificial intelligence in urological tumors [Article in Chinese]. Chinese Journal of Cancer. Jan 27, 2022;32(1):68-74. [CrossRef]
  10. Yang L, Chen K, Chen D. Consistency study of artificial intelligence-assisted CT diagnosis of liver tumors and pathological biopsy puncture [Article in Chinese]. Journal of Integrated Traditional Chinese and Western Medicine for Liver Diseases. 2023;33(11):1022-1025. [CrossRef]
  11. Wang Y, Yu S, Pang M. Research progress on the application of artificial intelligence in the diagnosis and treatment of gastrointestinal tumors [Article in Chinese]. Journal of Practical Hospital Clinics. 2023;20(1):166-170. [CrossRef]
  12. Wang X, Pan W, Zhang Q. Progress in the application of artificial intelligence-assisted diagnosis of malignant tumors [Article in Chinese]. Cancer Research on Prevention and Treatment. Oct 24, 2020;47(10):788-792. [CrossRef]
  13. Li J, Jiang Z. Establishment and application of artificial intelligence decision systems by the Chinese Society of Clinical Oncology [Article in Chinese]. Chinese Medical Journal. Feb 18, 2020;100(6):411-415. [CrossRef]
  14. Hamedani Z, Moradi M, Kalroozi F, Manafi Anari A, Jalalifar E, Ansari A, et al. Evaluation of acceptance, attitude, and knowledge towards artificial intelligence and its application from the point of view of physicians and nurses: a provincial survey study in Iran: a cross-sectional descriptive-analytical study. Health Sci Rep. Sep 04, 2023;6(9):e1543. [FREE Full text] [CrossRef] [Medline]
  15. Mundell E. Doctors are excited, concerned about AI's role in medicine: poll. U.S. News. Dec 15, 2023. URL: https:/​/www.​usnews.com/​news/​health-news/​articles/​2023-12-15/​doctors-are-excited-concerned-about-ais-role-in-medicine-poll [accessed 2024-05-13]
  16. Chua IS, Gaziel-Yablowitz M, Korach ZT, Kehl KL, Levitan NA, Arriaga YE, et al. Artificial intelligence in oncology: path to implementation. Cancer Med. Jun 07, 2021;10(12):4138-4149. [FREE Full text] [CrossRef] [Medline]
  17. Reffien MAM, Selamat EM, Sobri HNM, Hanan MFM, Abas MI, Ishak MFM, et al. Physicians’ attitude towards artificial intelligence in medicine, their expectations and concerns: an online mobile survey. Malaysian Journal of Public Health Medicine. Apr 24, 2021;21(1):181-189. [CrossRef]
  18. Leenhardt R, Sainz IFU, Rondonotti E, Toth E, van de Bruaene C, Baltes P, et al. PEACE: perception and expectations toward artificial intelligence in capsule endoscopy. J Clin Med. Dec 06, 2021;10(23):5708. [FREE Full text] [CrossRef] [Medline]
  19. Doraiswamy PM, Blease C, Bodner K. Artificial intelligence and the future of psychiatry: insights from a global physician survey. Artif Intell Med. Jan 2020;102:101753. [CrossRef] [Medline]
  20. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. Jan 2019;25(1):44-56. [FREE Full text] [CrossRef] [Medline]
  21. Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. Apr 04, 2019;380(14):1347-1358. [CrossRef] [Medline]
  22. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. Oct 25, 2019;366(6464):447-453. [CrossRef] [Medline]
  23. Challen R, Denny J, Pitt M, Gompels L, Edwards T, Tsaneva-Atanasova K. Artificial intelligence, bias and clinical safety. BMJ Qual Saf. Mar 2019;28(3):231-237. [FREE Full text] [CrossRef] [Medline]
  24. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: humanism and artificial intelligence. JAMA. Jan 02, 2018;319(1):19-20. [CrossRef] [Medline]
  25. Wartman S, Combs CD. Reimagining medical education in the age of AI. AMA J Ethics. Feb 01, 2019;21(2):E146-E152. [FREE Full text] [CrossRef] [Medline]
  26. Parasuraman R, Riley V. Humans and automation: use, misuse, disuse, abuse. Hum Factors. Nov 23, 2016;39(2):230-253. [CrossRef]
  27. Kluge EW. Artificial intelligence in healthcare: ethical considerations. Healthc Manage Forum. Jan 2020;33(1):47-49. [CrossRef] [Medline]
  28. Abouelmehdi K, Beni-Hessane A, Khaloufi H. Big healthcare data: preserving security and privacy. J Big Data. Jan 9, 2018;5(1). [FREE Full text] [CrossRef]
  29. Fernandes L, O'Connor M, Weaver V. Big data, bigger outcomes: healthcare is embracing the big data movement, hoping to revolutionize HIM by distilling vast collection of data for specific analysis. J AHIMA. Oct 2012;83(10):38-43; quiz 44. [Medline]
  30. Romanosky S, Hoffman D, Acquisti A. Empirical analysis of data breach litigation. J Empirical Legal Studies. Jan 17, 2014;11(1):74-104. [FREE Full text] [CrossRef]
  31. Miceli M, Posada J, Yang T. Studying up machine learning data: why talk about bias when we mean power? Proc ACM Hum Comput Interact. Jan 14, 2022;6(GROUP):1-14. [CrossRef]
  32. Mittermaier M, Raza MM, Kvedar JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digit Med. Jun 14, 2023;6(1):113. [FREE Full text] [CrossRef] [Medline]
  33. Gerke S, Minssen T, Cohen IG. Ethical and legal challenges of artificial intelligence-driven health care. In: Bohr A, Memarzadeh K, editors. Artificial Intelligence in Healthcare. Amsterdam, the Netherlands. Elsevier Academic Press; 2020:295-336.
  34. Bohr A, Memarzadeh K. The rise of artificial intelligence in healthcare applications. In: Bohr A, Memarzadeh K, editors. Artificial Intelligence in Healthcare. Amsterdam, the Netherlands. Elsevier Academic Press; 2020:25-60.
  35. Stöger K, Schneeberger D, Holzinger A. Medical artificial intelligence. Commun ACM. Oct 25, 2021;64(11):34-36. [CrossRef]
  36. Pagallo U, O'Sullivan S, Nevejans N, Holzinger A, Friebe M, Jeanquartier F, et al. The underuse of AI in the health sector: opportunity costs, success stories, risks and recommendations. Health Technol (Berl). 2024;14(1):1-14. [FREE Full text] [CrossRef] [Medline]
  37. Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell. May 2019;1(5):206-215. [FREE Full text] [CrossRef] [Medline]
  38. Müller H, Holzinger A, Plass M, Brcic L, Stumptner C, Zatloukal K. Explainability and causability for artificial intelligence-supported medical image analysis in the context of the European In Vitro Diagnostic Regulation. N Biotechnol. Sep 25, 2022;70:67-72. [FREE Full text] [CrossRef] [Medline]
  39. Luxton DD. Artificial Intelligence in Behavioral and Mental Health Care. Amsterdam, the Netherlands. Elsevier Academic Press; 2016.
  40. Shortliffe E, Sepúlveda MJ. Clinical decision support in the era of artificial intelligence. JAMA. Dec 04, 2018;320(21):2199-2200. [FREE Full text] [CrossRef] [Medline]
  41. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. Jun 2019;6(2):94-98. [FREE Full text] [CrossRef] [Medline]
  42. Ofosu-Ampong K. Gender differences in perception of artificial intelligence-based tools. Journal of Digital Art and Humanities. Dec 13, 2023;4(2):52-56. [CrossRef]
  43. Bughin J, Hazan E, Ramaswamy S, Chui M, Allas T, Dahlström P, et al. Artificial intelligence: the next digital frontier? McKinsey Global Institute. Jun 2017. URL: https://tinyurl.com/4xdtjr7z [accessed 2024-05-03]
  44. Samoili S, López Cobo M, Gómez E, de Prato G, Martínez-Plumed F, Delipetrev B. AI watch. defining artificial intelligence. towards an operational definition and taxonomy of artificial intelligence. European Commission. 2020. URL: https:/​/publications.​jrc.ec.europa.eu/​repository/​bitstream/​JRC118163/​jrc118163_ai_watch.​_defining_artificial_intelligence_1.​pdf [accessed 2024-05-03]
  45. Scheetz J, Rothschild P, McGuinness M, Hadoux X, Soyer HP, Janda M, et al. A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology. Sci Rep. Mar 04, 2021;11(1):5193. [FREE Full text] [CrossRef] [Medline]
  46. Lorenzini G, Arbelaez Ossa L, Shaw DM, Elger BS. Artificial intelligence and the doctor-patient relationship expanding the paradigm of shared decision making. Bioethics. Jun 25, 2023;37(5):424-429. [CrossRef] [Medline]
  47. Sauerbrei A, Kerasidou A, Lucivero F, Hallowell N. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med Inform Decis Mak. Apr 20, 2023;23(1):73. [FREE Full text] [CrossRef] [Medline]
  48. Mittelstadt B. The impact of artificial intelligence on the doctor-patient relationship. Council of Europe. 2021. URL: https://www.coe.int/en/web/bioethics/report-impact-of-ai-on-the-doctor-patient-relationship [accessed 2024-05-03]
  49. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. Dec 13, 2016;316(22):2402-2410. [CrossRef] [Medline]
  50. Pallua JD, Brunner A, Zelger B, Schirmer M, Haybaeck J. The future of pathology is digital. Pathol Res Pract. Sep 2020;216(9):153040. [FREE Full text] [CrossRef] [Medline]
  51. Zou J, Huss M, Abid A, Mohammadi P, Torkamani A, Telenti A. A primer on deep learning in genomics. Nat Genet. Jan 2019;51(1):12-18. [CrossRef] [Medline]
  52. Fjelland R. Why general artificial intelligence will not be realized. Humanit Soc Sci Commun. Jun 17, 2020;7(1):10. [FREE Full text] [CrossRef]
  53. McStay A. Emotional AI and EdTech: serving the public good? Learn Media Technol. Nov 05, 2019;45(3):270-283. [CrossRef]
  54. Verganti R, Vendraminelli L, Iansiti M. Innovation and design in the age of artificial intelligence. J Prod Innov Manage. Apr 22, 2020;37(3):212-227. [CrossRef]
  55. Price WN2, Cohen IG. Privacy in the age of medical big data. Nat Med. Jan 2019;25(1):37-43. [FREE Full text] [CrossRef] [Medline]
  56. Char DS, Shah NH, Magnus D. Implementing machine learning in health care - addressing ethical challenges. N Engl J Med. Mar 15, 2018;378(11):981-983. [FREE Full text] [CrossRef] [Medline]
  57. Liu M, Ning Y, Teixayavong S, Mertens M, Xu J, Ting DSW, et al. A translational perspective towards clinical AI fairness. NPJ Digit Med. Sep 14, 2023;6(1):172. [FREE Full text] [CrossRef] [Medline]
  58. Wischmeyer T. Artificial intelligence and transparency: opening the black box. In: Wischmeyer T, Rademacher T, editors. Regulating Artificial Intelligence. Cham, Switzerland. Springer; 2020:75-101.
  59. Nguyen TV, Dakka MA, Diakiw SM, VerMilyea MD, Perugini M, Hall JMM, et al. A novel decentralized federated learning approach to train on globally distributed, poor quality, and protected private medical data. Sci Rep. May 25, 2022;12(1):8888. [FREE Full text] [CrossRef] [Medline]
  60. Groves R, Presser S, Dipko S. The role of topic interest in survey participation decisions. Public Opin Q. Mar 1, 2004;68(1):2-31. [FREE Full text] [CrossRef]


Abbreviations

AI: artificial intelligence


Edited by A Mavragani; submitted 24.10.23; peer-reviewed by L He, U Kanike, T Davidson, A Holzinger; comments to author 09.02.24; revised version received 21.02.24; accepted 03.04.24; published 05.06.24.

Copyright

©Ming Li, XiaoMin Xiong, Bo Xu, Conan Dickson. Originally published in JMIR Formative Research (https://formative.jmir.org), 05.06.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.