Abstract
The regulatory focus on premarket approval overlooks and limits the life-cycle, real-world adaptiveness of artificial intelligence in health care. Ongoing oversight and quality assurance therefore fall to health care providers-cum-deployers, especially during early adoption.
JMIR Form Res 2025;9:e74038. doi: 10.2196/74038
Keywords
Introduction
This study explores how the specific nature of health care artificial intelligence (AI) regulation influences the management of its deployment. Food and Drug Administration and European Commission regulations focus on premarket approval yet fail to address postmarketing monitoring of AI [3]. Better regulation of “adaptive” AI systems is needed: such systems are treated with precaution as “high risk” and are prohibited from performing fully autonomous diagnosis. As AI models are exposed to new data in clinical settings, their performance may degrade or drift over time, necessitating ongoing oversight [1,2]. In practice, health care providers must often step into this zone of regulatory ambiguity and develop local protocols for quality assurance during early adoption.
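To make the oversight problem concrete, the following is a minimal, purely illustrative sketch, not drawn from the study sites (the window size and alert threshold are assumptions), of how a deploying hospital could track rolling agreement between AI outputs and radiologists' reads and flag possible performance drift:

```python
# Illustrative sketch only: rolling agreement between AI alerts and
# radiologist reads, used to flag possible postdeployment drift.
# The window size and threshold below are assumptions, not study values.
from collections import deque


class DriftMonitor:
    def __init__(self, window: int = 200, min_agreement: float = 0.85):
        self.reads = deque(maxlen=window)    # recent agree/disagree outcomes
        self.min_agreement = min_agreement   # alert below this agreement rate

    def record(self, ai_positive: bool, radiologist_positive: bool) -> None:
        self.reads.append(ai_positive == radiologist_positive)

    def agreement(self) -> float:
        return sum(self.reads) / len(self.reads) if self.reads else 1.0

    def drift_alert(self) -> bool:
        # Alert only once the window is full, to avoid noisy early readings.
        return len(self.reads) == self.reads.maxlen and \
            self.agreement() < self.min_agreement
```

In such a setup, a sustained drop in agreement would prompt human review rather than any automatic model update, consistent with the locked-model requirement discussed in the Results.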
Methods
Overview
In-depth semistructured interviews (30-80 min) were conducted with 65 managers and physicians (29 women, 36 men; mainly radiologists) from 4 Israeli hospitals. Participants, identified by hospital management as deployers of AI for medical imaging analysis, took part voluntarily. Using a grounded theory approach [4], preliminary codes were derived from the literature review, and additional themes were identified through interview analysis. Members of the research team coded in parallel to ensure interrater reliability and to discuss emerging codes. Data analysis concluded once the team agreed that thematic saturation had been reached.
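The paper does not specify how coder agreement was quantified; as a generic illustration only, agreement between two coders assigning one code per transcript segment could be checked with Cohen's kappa (the codes and values below are hypothetical):

```python
# Generic illustration, not the study's actual procedure: Cohen's kappa
# for two coders who each assign one code per transcript segment.
from collections import Counter


def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    assert coder_a and len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: probability both coders pick the same code at random.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0


# Hypothetical codes for 6 segments: QA (quality assurance) vs REG (regulation).
print(cohens_kappa(["QA", "REG", "QA", "QA", "REG", "QA"],
                   ["QA", "REG", "QA", "REG", "REG", "QA"]))  # ≈0.67
```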
Ethical Considerations
This research was approved by the Clalit Health Services Ethical Review Board (number 0001-22-CNT-C). Participants signed an informed consent form; all details were anonymized to maintain privacy and confidentiality; no compensation was provided to participants.
Results
Overview
Many of our respondents criticized the absence of clear regulations for AI implementation, which they found challenging and at times confusing yet also saw as enabling experimentation.
The Precautionary Nature of AI Regulation in Health Care and Its Postdeployment Implications
Current premarket approval processes require AI-based medical devices to stop retraining once deployed, reflecting the precautionary nature of health care AI regulation. Our respondents expressed worries about having to become the “AI’s QA [quality assurance] testers”: “I cannot always rely on the AI in trauma cases where the injection is not good enough and it does not detect heart failure...At nights I filter the hemorrhages alerts because it is over-sensitive...I wish I could teach it, talk to it, like a real team member” (N8).
AI systems were considered “overalerting,” and managers instructed physicians to customize the alert threshold according to their needs: “We got used to all sorts of false alerts. So, we rely more on our experience and knowledge, and we combine that with the system” (M6). Managers and local “champions” (“superuser” radiologists) had to become AI supervisors: “I make everyone here provide feedback regarding the AI system” (M5). Playing an active role in shaping the use of AI systems was nevertheless seen positively: “We feel like we are part of the development team. It feels good. It makes me even more engaged” (R1).
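The interviews do not describe the vendors' configuration interfaces; as a hypothetical sketch, the threshold customization respondents described might amount to filtering AI findings by confidence score against site-specific cutoffs (all finding types and values here are invented for illustration):

```python
# Hypothetical sketch of local alert-threshold customization; the finding
# types and cutoffs are invented for illustration, not taken from the study.
SITE_THRESHOLDS = {
    "intracranial_hemorrhage": 0.90,  # raised locally to curb night-shift noise
    "pneumothorax": 0.75,
}
DEFAULT_THRESHOLD = 0.80


def filter_alerts(findings: list[dict]) -> list[dict]:
    """Keep only AI findings whose confidence clears the local cutoff."""
    return [
        f for f in findings
        if f["score"] >= SITE_THRESHOLDS.get(f["type"], DEFAULT_THRESHOLD)
    ]


alerts = filter_alerts([
    {"type": "intracranial_hemorrhage", "score": 0.86},  # suppressed locally
    {"type": "pneumothorax", "score": 0.81},             # surfaced
])
```

Site-level filtering of this kind leaves the underlying locked model untouched while still shifting what clinicians actually see, which is consistent with respondents describing themselves as de facto QA testers.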
Discussion
Our main findings show how the regulation of health care AI shapes the management of its deployment, highlighting the need to customize AI tools through local experimentation. Strategies included vendor-appointed “champions” and on-site feedback loops [5]. Such customization, however, requires further monitoring of bias metrics in model outputs and of physicians’ selective use of the tools, as sketched below.
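As one minimal sketch of what such bias monitoring could look like (the subgroup variable, records, and the size of a worrying gap are all assumptions, not findings of this study), a site could compare false positive rates across patient subgroups after local customization:

```python
# Illustrative sketch: compare false positive rates (FPRs) across patient
# subgroups to flag potential bias after local threshold customization.
# The subgroup variable and records below are hypothetical.
def false_positive_rate(records: list[dict]) -> float:
    negatives = [r for r in records if not r["ground_truth"]]
    if not negatives:
        return 0.0
    return sum(r["ai_alert"] for r in negatives) / len(negatives)


def fpr_gap(records: list[dict], group_key: str = "sex") -> float:
    groups = {r[group_key] for r in records}
    rates = [false_positive_rate([r for r in records if r[group_key] == g])
             for g in groups]
    return max(rates) - min(rates)  # large gaps warrant human review


records = [
    {"sex": "F", "ground_truth": False, "ai_alert": True},
    {"sex": "F", "ground_truth": False, "ai_alert": False},
    {"sex": "M", "ground_truth": False, "ai_alert": False},
    {"sex": "M", "ground_truth": True, "ai_alert": True},
]
print(fpr_gap(records))  # 0.5 in this toy data; a gap this size merits review
```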
There are some limitations to consider. Findings were drawn from interviews with radiologists, considered early adopters of technology, whose experiences may differ from those of other health care providers. Because the study captures early adoption phases, future research should explore how these dynamics shape long-term AI integration. In conclusion, the broader implication is that for the “use first, trust later” mindset to become a sustainable approach to medical AI implementation, it cannot be left unmonitored. Regulation should address adaptive AI and consider how transparency (rather than explainability) of initial and postdeployment training, together with bias monitoring, can enable oversight. Additionally, hospital administrators should work with AI developers to monitor biases introduced by customization [6], as well as to manage effective teaming routines with AI.
Acknowledgments
The authors acknowledge support from the Israel Science Foundation, grant number 1325/23. We are grateful to all our team members at the AI in Healthcare Organizations Lab [7] and to the interviewees who participated in the study.
Data Availability
Anonymized transcripts are available from the corresponding author.
Authors' Contributions
AR: conceptualization, writing, formal analysis. YI: writing, formal analysis. ZP: review & editing, funding acquisition. BH: investigation, formal analysis, review & editing. ZI and OW: project administration.
Conflicts of Interest
None declared.
References
1. Agarwal R, Dugas M, Gao G. Augmenting physicians with artificial intelligence to transform healthcare: challenges and opportunities. J Econ Manag Strategy. Mar 2024;33(2):360-374. [CrossRef]
2. Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018;2(10):719-731.
3. Lee P, Kummerfeld E, Yu KH. Regulatory oversight of artificial intelligence and machine learning in healthcare. J Healthc Inform Res. 2020;4(1):32-54.
4. Gioia DA, Corley KG, Hamilton AL. Seeking qualitative rigor in inductive research: notes on the Gioia methodology. Organ Res Methods. Jan 2013;16(1):15-31. [CrossRef]
5. Faulconbridge J, Sarwar A, Spring M. How professionals adapt to artificial intelligence: the role of intertwined boundary work. J Manag Stud. Jul 2025;62(5):1991-2024. [CrossRef]
6. Cross JL, Choma MA, Onofrey JA. Bias in medical AI: implications for clinical decision-making. PLOS Digit Health. Nov 2024;3(11):e0000651. [CrossRef]
7. AI in Healthcare Organizations Lab. URL: https://aihealthcarelab.com/ [Accessed 2025-11-03]
Abbreviations
AI: artificial intelligence
Edited by Amaryllis Mavragani, Javad Sarvestan; submitted 16.Mar.2025; peer-reviewed by Adeleke Adekola, Chibuzo Onah, Sridevi Wagle; final revised version received 23.Sep.2025; accepted 24.Sep.2025; published 10.Nov.2025.
Copyright© Aviad Raz, Yael Inbar, Ziv Paz, Barkan Hofman, Zalman Itzhakov, Orly Weinstein. Originally published in JMIR Formative Research (https://formative.jmir.org), 10.Nov.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.

