@Article{info:doi/10.2196/64266,
  author="Ackerhans, Sophia and Wehkamp, Kai and Petzina, Rainer and Dumitrescu, Daniel and Schultz, Carsten",
  title="Perceived Trust and Professional Identity Threat in AI-Based Clinical Decision Support Systems: Scenario-Based Experimental Study on AI Process Design Features",
  journal="JMIR Form Res",
  year="2025",
  month="Mar",
  day="26",
  volume="9",
  pages="e64266",
  keywords="artificial intelligence; clinical decision support systems; explainable artificial intelligence; professional identity threat; health care; physicians; perceptions; professional identity",
  abstract="Background: Artificial intelligence (AI)--based systems in medicine, such as clinical decision support systems (CDSSs), have shown promising results in health care, sometimes outperforming human specialists. However, integrating AI may challenge medical professionals' identities and limit their trust in the technology, leading health care professionals to reject AI-based systems. Objective: This study aims to explore the impact of AI process design features on physicians' trust in the AI solution and on perceived threats to their professional identity. These design features involve the explainability of AI-based CDSS decision outcomes, the integration depth of the AI-generated advice into the clinical workflow, and the physician's accountability for AI system-induced medical decisions. Methods: We conducted a 3-factorial, web-based, between-subject, scenario-based experiment with 292 participants, comprising medical students in medical training and experienced physicians across different specialties. Participants were presented with an AI-based CDSS for sepsis prediction and prevention for use in a hospital. Each participant was given a scenario in which the 3 design features of the AI-based CDSS were manipulated in a 2{\texttimes}2{\texttimes}2 factorial design. The SPSS PROCESS (IBM Corp) macro was used for hypothesis testing. Results: The results suggest that the explainability of the AI-based CDSS was positively associated with both trust in the AI system ($\beta$=.508; P<.001) and professional identity threat perceptions ($\beta$=.351; P=.02). Trust in the AI system was negatively related to professional identity threat perceptions ($\beta$=-.138; P=.047), indicating that the effect on professional identity threat is partially mediated through trust. Deep integration of the AI-generated advice into the clinical workflow was positively associated with trust in the system ($\beta$=.262; P=.009). Accountability for the AI-based decisions, that is, a system-required signature, was positively associated with professional identity threat perceptions ($\beta$=.339; P=.004). Conclusions: Our research highlights the role of the process design features of AI systems used in medicine in shaping professional identity perceptions, mediated through increased trust in AI. An explainable AI-based CDSS and AI-generated advice that is deeply integrated into the clinical workflow reinforce trust, thereby mitigating perceived professional identity threats. However, explainable AI and individual accountability for the system's decisions directly exacerbate threat perceptions. Our findings illustrate the complex behavioral patterns around AI in health care and have broader implications for supporting the implementation of AI-based CDSSs in contexts where AI systems may affect professional identity.",
", issn="2561-326X", doi="10.2196/64266", url="https://formative.jmir.org/2025/1/e64266", url="https://doi.org/10.2196/64266" }