Original Paper
Abstract
Background: Clinicians are the interface between artificial intelligence (AI) applications and patient care. To maximize benefits and minimize risks of AI, clinicians must be “AI-ready”—that is, willing and able to understand, evaluate, and appropriately use AI tools in practice. Prior literature suggests that clinicians lack fundamental competencies in the use of AI. These gaps could be especially problematic in primary care, given its broad reach into patient care. We surveyed primary care providers (PCPs) in the United States’ largest integrated health care system, relatively early in its widespread implementation of clinical AI for frontline use, in order to identify readiness gaps that may warrant particular attention as part of a comprehensive AI implementation strategy.
Objective: The aim of this study was to characterize PCPs’ use, knowledge, attitudes, and training priorities related to AI in order to inform health system AI implementation efforts.
Methods: We conducted a national cross-sectional survey of United States Veterans Health Administration (VA) PCPs in October 2025, assessing AI use, self-reported knowledge, attitudes, and training experience. Descriptive analyses summarized responses with exploratory bivariate comparisons across clinician subgroups. Conventional content analysis with inductive coding was used to characterize open-ended responses providing a definition of AI.
Results: Among 170 respondents (170/989, 17.2% response rate), 66.5% (113/170) reported current AI use, most commonly generative AI and decision support tools. Overall attitudes toward AI were positive, with 70.6% (120/170) mostly enthusiastic or more enthusiastic than apprehensive. Confidence in understanding sources of AI bias (62/170, 36.5%) and ethical issues (81/170, 47.6%) was limited. When asked to define AI, very few respondents provided an accurate technical definition. Key concerns about use of AI included accountability, accuracy, and transparency. Though 88.2% (150/170) identified AI training as a priority, only 26.5% (45/170) had any training. Training experiences ranged widely in source, focus, and structure.
Conclusions: PCPs are eager to harness AI’s practical advantages but lack foundational competencies to do so in ways that maximize benefit and minimize risk. Our findings highlight a need for targeted education that prioritizes critical appraisal, workflow integration, and risk mitigation, supported by governance that addresses clinicians’ concerns and validated measures to evaluate progress toward an AI-ready workforce. These steps can empower PCPs to leverage AI safely and effectively and strengthen the quality and safety of primary care delivery at scale.
doi:10.2196/90641
Introduction
Clinician readiness for artificial intelligence (AI) in health care is critical to realizing AI’s potential in practice. Clinicians are the interface between AI applications and patient care. They are responsible for interpreting and integrating AI outputs within the complex realities of individual patients, incomplete information, and clinical uncertainty. Even when AI systems are “trustworthy,” with rigorous risk assessment and mitigation built in, vulnerabilities remain at the point of clinical application where judgment, workflow integration, and accountability ultimately reside. To maximize benefits and minimize risks, clinicians must be “AI-ready”—that is, willing and able to understand, evaluate, and appropriately use AI tools in practice, as reflected by their knowledge and attitudes related to clinical AI.
AI introduces a unique set of concerns for clinicians, including accountability for errors, algorithmic bias, and potential erosion of the patient-clinician relationship [,]. Without sufficient knowledge or supportive attitudes, clinicians may fail to realize AI’s benefits. But of greater concern are threats to clinical quality, safety, ethics, patient trust, and clinician well-being. Recognizing these risks, health systems and professional organizations have identified an AI-ready clinical workforce as a top strategic priority [,].
AI use in clinical care is expanding rapidly, including ambient documentation [], image analysis, and point-of-care clinical guidance. However, existing literature suggests clinicians have limited understanding of AI fundamentals, data and algorithmic bias, and ethical and legal implications, alongside concerns about workflow integration and clinician-patient relationships [-]. These gaps may be particularly problematic in primary care. Primary care providers (PCPs) are the first point of contact for most patients and interact with a wide range of clinical, operational, and administrative technologies. Limited readiness among PCPs could therefore have broad implications for patient safety and quality of care.
We conducted a national survey of PCPs in the Veterans Health Administration (VA) to describe current AI use, characterize knowledge and attitudes regarding AI in clinical care, and identify priority areas for education and implementation support. We also explored variations in knowledge and attitudes across PCP subgroups. Findings on readiness components are intended to inform health system efforts to ready PCPs for safe, effective AI use.
Methods
Setting and Sample
The VA is the largest integrated health system in the United States, with over 9.1 million enrollees across 1380 facilities []. We conducted a cross-sectional survey of VA PCPs in October 2025. All VA PCPs were eligible. Using the VA Corporate Data Warehouse, we randomly selected 1000 PCPs and emailed invitations to complete an online survey built in Microsoft Forms, with 2 reminders; 989 PCPs were successfully contacted. Participants were informed that identifying information would be used solely to link responses to demographic data and would be masked during analysis.
Survey Development
Survey development was guided by Rogers’ Diffusion of Innovation Theory, which positions user knowledge and attitudes as key determinants of technology adoption []. We aimed to understand PCP readiness rather than the implementation context as a whole (ie, the organizational context for use). To develop the survey, we first reviewed measures in the literature that have been used to assess knowledge and attitudes toward AI among various groups, including the general population, employees, medical students, and health professionals [-]. None were well-suited to our goal of using a brief survey to measure high-level knowledge about clinical AI or attitudes toward use of AI in clinical care specifically. For example, measures were too detailed in their assessments of AI literacy for a group that has not undergone a formal curriculum [,], focused on negative attitudes toward AI [-], or examined user experience [-] with a single tool. We therefore drafted knowledge items based on topics commonly measured in literacy surveys that might reasonably be assessed via self-report, common attitudes about use of AI widely reported in the literature, and potential outcomes from use of AI in primary care. The initial pool of 20 items was reviewed, refined, and reduced to 14 items in total with input from AI and clinical leadership at the national and regional levels (n=4). The final survey included an item assessing current AI tool use, 2 closed-ended and 1 open-ended item on self-assessed AI knowledge, 4 items on attitudes toward AI, and 4 items on training experience and priorities. Knowledge questions were intended as indicators of foundational AI literacy, and attitude questions captured perceived benefits and risks relevant to AI in clinical care.
Data Analysis
We calculated descriptive statistics and compared survey respondents to nonrespondents on age, gender, rurality, and clinical credential (physician vs other credential), using VA Corporate Data Warehouse data. We summarized patterns in AI use, knowledge, attitudes, and training priorities. Open-ended responses to the question asking for a definition of AI were analyzed using conventional content analysis. An inductive coding approach was used, allowing categories to emerge directly from the data rather than from a predetermined framework. This process involved repeated reading of responses, assignment of responses to initial categories, and iterative refinement into broader categories to characterize the range and depth of respondents’ understanding of AI concepts.
Given the exploratory nature of the study, no formal inferential tests were performed; however, we conducted planned bivariate comparisons across clinician subgroups to inform system-level targeting of training and implementation support. We examined whether responses on the 2 knowledge questions (“I feel confident describing potential sources of bias in AI applications” and “I have a good grasp of the ethical issues that may arise in my use of AI”; Strongly Agree/Agree vs Strongly Disagree/Disagree/Neither Agree nor Disagree) and on the overall attitude question (“What is your overall attitude toward using AI tools in your work as a PCP?”; Mostly Enthusiastic/More Enthusiastic than Apprehensive vs Mostly Apprehensive/More Apprehensive than Enthusiastic/Equally Enthusiastic and Apprehensive) differed by age (continuous variable), rural practice setting (yes/no), current use of any AI tools (yes/no), and prior AI training (yes/no).
Ethical Considerations
This project was determined to be quality improvement and not human subjects research by the VA Bedford Healthcare System Institutional Review Board.
Results
Participants’ Characteristics
A total of 170 PCPs completed the survey (170/989, 17.2% response rate). Respondents and nonrespondents did not differ significantly on demographic characteristics. The majority of respondents were aged 40 to 59 years (110/170, 64.7%), female (115/170, 67.6%), and physicians (105/170, 61.8%); about one-third practiced in rural locations (). Nearly one-third (52/170, 30.6%) considered themselves among the earliest adopters of new technology in general, with another 51.2% (87/170) considering themselves among early (but not the earliest) adopters. Nearly half (80/170, 47.1%) of the respondents used VA GPT (the VA’s internal generative AI chat tool), 37.1% (63/170) used OpenEvidence (an AI-enabled clinical decision support platform), 11.2% (19/170) used Microsoft Copilot, and 33.5% (57/170) reported no current AI use.
| Characteristic, category | Values, n (%) |
| --- | --- |
| **Age group (y)** | |
| 30-39 | 27 (15.9) |
| 40-49 | 52 (30.6) |
| 50-59 | 58 (34.1) |
| 60-69 | 29 (17.1) |
| 70-79 | 3 (1.8) |
| Missing | 1 (0.6) |
| **Rural practice location** | |
| Rural | 53 (31.2) |
| Nonrural | 117 (68.8) |
| **Gender** | |
| Female | 115 (67.6) |
| Male | 55 (32.4) |
| **Clinical credential** | |
| Physician (MD/DO) | 105 (61.8) |
| Nonphysician clinician (nurse practitioner, physician assistant, etc) | 65 (38.2) |
| **Current use of artificial intelligence tools^a** | |
| No current use | 57 (33.5) |
| VA GPT | 80 (47.1) |
| OpenEvidence | 63 (37.1) |
| Ambient scribing/dictation using VA GPT | 14 (8.2) |
| Microsoft Copilot | 19 (11.2) |
| Other artificial intelligence tools | 12 (7.1) |
| **General stance toward new technology** | |
| First to try, willing to take risks | 52 (30.6) |
| Embrace early after seeing potential impact | 87 (51.2) |
| Wait until technology has proven value | 24 (14.1) |
| Skeptical and cautious, adopt after most others | 6 (3.5) |
| Last to adopt, prefer traditional approaches | 1 (0.6) |
| **Formal or informal artificial intelligence training** | |
| No training | 125 (73.5) |
| Any training | 45 (26.5) |

^a“Check all that apply” item; respondents could select multiple artificial intelligence tools.
Knowledge of AI
Confidence describing potential sources of bias in AI was mixed: 36.5% (62/170) agreed or strongly agreed that they were confident, 41.8% (71/170) were neutral, and 21.2% (36/170) disagreed. Confidence in grasping ethical issues arising from AI use was somewhat higher, with 47.6% (81/170) agreeing or strongly agreeing, 32.9% (56/170) neutral, and 18.8% (32/170) disagreeing or strongly disagreeing. Increasing age was associated with greater confidence in describing potential sources of bias.
Respondents were asked “Based on your current level of familiarity, how would you define AI technology in a few words?” Responses ranged widely in content, containing elements of definitions but also descriptions of how AI works, use cases, perceived impact, and attitudes toward AI. Very few responses were technical definitions referencing AI’s ability to simulate human intelligence; one participant defined AI as “A computer system designed to perform tasks that typically require human intelligence such as reasoning, learning, and decision making. It works through algorithms and analyzing databases.” When definitions were provided, they were more often nontechnical: “a system that uses data and engineering to produce solutions.” Several responses defined AI via a use case: “A way to summarize the visit and create a template for review to save time on tedious documentation.” Others defined AI in terms of impact, both positive (“a time saver”) and negative (“horrific as it is a HUGE concern for patient privacy”). Still others simply expressed a lack of knowledge: “Not sure of exactly what it is. My guess is like ChatGPT that it can help with writing, looking for clinical diagnosis, etc.”
Attitudes Toward AI
Most respondents expressed positive attitudes toward AI tools in clinical work: 38.8% (66/170) were mostly enthusiastic, 31.8% (54/170) more enthusiastic than apprehensive, and 18.8% (32/170) equally enthusiastic and apprehensive. Only 3.5% (6/170) were mostly apprehensive. A more positive overall attitude toward AI was associated with current use of AI, but not age, rurality, or AI training.
When asked to select the 3 positive outcomes about which they were most enthusiastic, respondents most frequently selected reduced time on administrative tasks (133/170, 78.2%), more informed clinical decision-making (96/170, 56.5%), and increased face-to-face time with patients (75/170, 44.1%) ().

Respondents were also asked to indicate the 3 use cases about which they were most enthusiastic (). Most frequently selected were automated clinical documentation (128/170, 75.3%), clinical data extraction (106/170, 62.4%), and point-of-care clinical guidance (65/170, 38.2%).

Regarding potential concerns, respondents were “moderately apprehensive” or “very apprehensive” about accountability and liability (123/170, 72.4%), accuracy of outputs (117/170, 68.8%), and transparency in how AI reached a decision (113/170, 66.5%) (). Of less concern were job security (66/170, 38.8%), workflow integration (64/170, 37.6%), and impact on patient-clinician relationships (55/170, 32.4%).

Training Experience and Priorities
Only 26.5% (45/170) reported having any formal or informal AI training, with sources ranging from YouTube and in-service sessions to Google AI and formal coursework. Training was a moderate or high priority for 88.2% (150/170). For their top 3 training priorities, 78.2% (133/170) selected workflow integration; 58.8% (100/170) selected how to evaluate output for accuracy, safety, and bias; and 49.4% (84/170) selected tool-specific training. Foundational AI knowledge (74/170, 43.5%) and ethical, legal, and regulatory issues (78/170, 45.9%) were also commonly prioritized. Communication about AI with patients was least frequently prioritized (17/170, 10.0%).
Taken together, these results show that PCPs are actively using AI tools and are optimistic about them, yet they experience notable gaps in foundational knowledge, evaluative skills, and training exposure.
Discussion
This national survey of VA PCPs reveals a promising foundation for successful AI implementation in primary care. Respondents expressed strong enthusiasm for tools that could improve workflow efficiency and enhance decision-making, especially in high-value use cases such as automated documentation, data extraction, and point-of-care guidance. Many respondents had already adopted AI chat or decision support tools. These findings highlight meaningful openness to AI innovation in primary care and suggest that health systems could capitalize on existing clinician interest and experimentation with AI at the front lines.
At the same time, respondents reported uneven knowledge about AI fundamentals and limited confidence addressing bias and ethical issues, which are domains central to safe and responsible AI use. Qualitative responses underscore the limited technical understanding of AI among many clinicians. Apprehension regarding liability, accuracy, and transparency suggests that clinicians recognize key risks but feel ill-equipped to evaluate or mitigate them. Training experience was limited and heterogeneous despite nearly universal interest. Together, these findings illuminate a tension: PCPs are eager to harness AI’s practical advantages but lack the foundational competencies to do so in ways that maximize benefit and minimize risk.
Our results align with prior studies showing clinicians’ rapid AI uptake alongside uncertainty around the implications of use, including technical evaluation, risk mitigation, and professional responsibilities [-]. Our study extends existing research by focusing on primary care—a setting with broad clinical reach and substantial implications for quality and safety—and by examining a single, integrated health system actively pursuing AI readiness. By identifying specific gaps in knowledge, attitudes, and training, our findings point to levers that health system leaders directly influence through implementation support and governance.
The VA and other health systems have prioritized developing an “AI-ready” workforce []. Our findings suggest concrete steps toward this goal. Standardized, clinician-focused training on foundational AI concepts, paired with policies requiring baseline AI literacy and tool-specific training, could help establish consistent expectations and competency across a health system. Many states mandate continuing medical education in high-impact areas such as end-of-life care, risk management, and electronic health record usage. Given that AI carries comparable implications for patient safety, quality of care, and workflow, similar mandatory educational requirements may be warranted as AI’s role in care delivery expands. Validated measures of AI readiness could further support needs assessment, targeted training and implementation efforts, and evaluation over time. Beyond education, governance structures can establish expectations for transparency, accountability, and workflow integration.
Concerns that were lower ranked on the survey remain important for comprehensive AI implementation, such as communicating about AI with patients, considering impact on patient-clinician relationships, and using AI for outreach support and population health management. Respondents may have prioritized these concerns lower because they are less immediate than daily workflow demands. Nonetheless, focused educational content can raise awareness of these important issues without overwhelming core training needs. A tiered approach to training could address immediate workflow and safety concerns first while introducing relational and population health considerations as AI use matures in practice.
Although this study focuses on VA primary care, several findings likely generalize beyond this setting. VA PCPs practice within an integrated health system that offers relatively consistent access to new technologies, training resources, and technical support. These features may facilitate earlier exposure to AI tools compared with some non-VA settings, where access to these resources may be more variable. At the same time, our findings suggest that even in a system with substantial organizational capacity, clinicians still report uneven foundational knowledge and limited preparedness to evaluate potential risks, suggesting that these challenges may be at least as notable in more resource-constrained environments. Similar findings have been reported outside the United States, showing that clinicians across diverse health systems express enthusiasm for AI but face gaps in readiness [,,]. These parallels between the VA and other settings underscore a broader need for health systems to invest in developing AI readiness among their clinicians to support safe and effective AI use.
This study has limitations. While our response rate is consistent with other VA clinician surveys [-] and respondents resembled nonrespondents demographically, technologically enthusiastic respondents may be overrepresented. Knowledge gaps and apprehensions among nonrespondents may be even more pronounced, which would only underscore the importance of readiness efforts to optimize clinicians’ AI use. Also, our subgroup analyses are intended to be hypothesis-generating rather than definitive; future studies should examine subgroup differences in greater depth.
VA PCPs demonstrate strong enthusiasm for AI tools promising workflow efficiencies and clinical support and report current use of generative AI and decision support technologies. However, they also report limited training and uneven knowledge in foundational and evaluative domains. These findings highlight a need for targeted education that prioritizes critical appraisal, workflow integration, and risk mitigation, supported by governance that addresses clinicians’ concerns and validated measures of progress toward an AI-ready workforce. By providing one of the first system-wide assessments of PCPs’ AI readiness within an integrated health system that is itself relatively early in its AI implementation journey, this study offers novel insights into how frontline clinicians are engaging with AI, where critical gaps in readiness exist, and which levers health systems can use to enable safe and effective AI adoption that optimizes patient care while minimizing risks. Addressing these gaps will enable health systems to capitalize on clinician openness while strengthening the quality and safety of primary care delivery at scale.
Data Availability
The datasets generated or analyzed during this study are available from the corresponding author on reasonable request.
Disclaimer
The contents of this paper do not represent the views of Veterans Health Administration or the US Government.
Funding
This material is the result of work supported with resources and the use of facilities at the Veterans Health Administration Bedford Healthcare System.
Authors' Contributions
Conceptualization: VGV
Data curation: AJL
Formal analysis: AJL, BK
Supervision: VGV
Visualization: AJL
Writing – original draft: VGV
Writing – review and editing: VGV, AJL, BK
Conflicts of Interest
None declared.
References
- Weiner EB, Dankwa-Mullan I, Nelson WA, Hassanpour S. Ethical challenges and evolving strategies in the integration of artificial intelligence into clinical practice. PLOS Digit Health. Apr 2025;4(4):e0000810. [CrossRef] [Medline]
- Sauerbrei A, Kerasidou A, Lucivero F, Hallowell N. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med Inform Decis Mak. Apr 20, 2023;23(1):73. [FREE Full text] [CrossRef] [Medline]
- Weidener L, Fischer M. Role of ethics in developing AI-based applications in medicine: insights from expert interviews and discussion of implications. JMIR AI. Jan 12, 2024;3:e51204. [FREE Full text] [CrossRef] [Medline]
- Moldt J, Festl-Wietek T, Fuhl W, Zabel S, Claassen M, Wagner S, et al. Assessing AI awareness and identifying essential competencies: insights from key stakeholders in integrating AI into medical education. JMIR Med Educ. Jun 12, 2024;10:e58355. [FREE Full text] [CrossRef] [Medline]
- Tierney AA, Gayre G, Hoberman B, Mattern B, Ballesca M, Wilson Hannay SB, et al. Ambient artificial intelligence scribes: learnings after 1 year and over 2.5 million uses. NEJM Catalyst. Apr 16, 2025;6(5):1. [CrossRef]
- Scott IA, Carter SM, Coiera E. Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health Care Inform. Dec 2021;28(1):e100450. [FREE Full text] [CrossRef] [Medline]
- Barry B, Zhu X, Behnken E, Inselman J, Schaepe K, McCoy R, et al. Provider perspectives on artificial intelligence-guided screening for low ejection fraction in primary care: qualitative study. JMIR AI. Oct 14, 2022;1(1):e41940. [FREE Full text] [CrossRef] [Medline]
- Sumner J, Mohamed Ali J, Motani M, Ang A, Ho D, Mukhopadhyay A. Artificial intelligence guided dosing decisions: a qualitative study on health care provider perspectives. BMJ Health Care Inform. Sep 30, 2025;32(1):e101461. [CrossRef] [Medline]
- Nash DM, Thorpe C, Brown JB, Kueper JK, Rayner J, Lizotte DJ, et al. Perceptions of artificial intelligence use in primary care: a qualitative study with providers and staff of Ontario community health centres. J Am Board Fam Med. Mar 22, 2023;36(2):221-228. [CrossRef]
- U.S. Department of Veterans Affairs. About VHA. Veterans Health Administration. URL: https://www.va.gov/health/aboutVHA.asp
- Rogers EM, Singhal A, Quinlan MM. Diffusion of innovations. In: An Integrated Approach to Communication Theory and Research. Routledge; 2014:432-438.
- Laupichler MC, Aster A, Perschewski J, Schleiss J. Evaluating AI courses: a valid and reliable instrument for assessing artificial-intelligence learning through comparative self-assessment. Education Sciences. Sep 26, 2023;13(10):978. [CrossRef]
- Karaca O, Çalışkan SA, Demir K. Medical artificial intelligence readiness scale for medical students (MAIRS-MS) - development, validity and reliability study. BMC Med Educ. Feb 18, 2021;21(1):112. [FREE Full text] [CrossRef] [Medline]
- Sindermann C, Sha P, Zhou M, Wernicke J, Schmitt HS, Li M, et al. Assessing the attitude towards artificial intelligence: introduction of a short measure in German, Chinese, and English language. Künstl Intell. Sep 23, 2020;35(1):109-118. [CrossRef]
- Sallam M, Al-Mahzoum K, Almutairi Y, Alaqeel O, Abu Salami A, Almutairi Z, et al. Anxiety among medical students regarding generative artificial intelligence models: a pilot descriptive study. IME. Oct 09, 2024;3(4):406-425. [FREE Full text] [CrossRef]
- Sallam M, Al-Mahzoum K, Alaraji H, Albayati N, Alenzei S, AlFarhan F, et al. Apprehension toward generative artificial intelligence in healthcare: a multinational study among health sciences students. Front. Educ. May 1, 2025;10:1. [CrossRef]
- Terzi R. An adaptation of artificial intelligence anxiety scale into Turkish: Reliability and validity study. International Online Journal of Education and Teaching. Oct 01, 2020:1501-1515. [FREE Full text]
- Zhai H, Yang X, Xue J, Lavender C, Ye T, Li J, et al. Radiation oncologists' perceptions of adopting an artificial intelligence-assisted contouring technology: model development and questionnaire study. J Med Internet Res. Sep 30, 2021;23(9):e27122. [FREE Full text] [CrossRef] [Medline]
- Peres SC, Pham T, Phillips R. Validation of the System Usability Scale (SUS). Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Sep 30, 2013;57(1):192-196. [CrossRef]
- Schnall R, Cho H, Liu J. Health Information Technology Usability Evaluation Scale (Health-ITUES) for usability assessment of mobile health technology: validation study. JMIR Mhealth Uhealth. Jan 05, 2018;6(1):e4. [FREE Full text] [CrossRef] [Medline]
- Building the future: VA’s strategy for adopting high-impact artificial intelligence to improve services for veterans. US Department of Veterans Affairs. URL: https://department.va.gov/ai/building-the-future-vas-strategy-for-adopting-high-impact-artificial-intelligence-to-improve-services-for-veterans/ [accessed 2026-04-03]
- Scott IA, van der Vegt A, Lane P, McPhail S, Magrabi F. Achieving large-scale clinician adoption of AI-enabled decision support. BMJ Health Care Inform. May 30, 2024;31(1):e100971. [FREE Full text] [CrossRef] [Medline]
- Truong NM, Vo TQ, Tran HTB, Nguyen HT, Pham VNH. Healthcare students' knowledge, attitudes, and perspectives toward artificial intelligence in the southern Vietnam. Heliyon. Dec 2023;9(12):e22653. [FREE Full text] [CrossRef] [Medline]
- Vimalananda VG, Kragen B, Leibowitz AJ, Qian S, Wormwood J, Linsky AM, et al. Determinants of implementation of continuous glucose monitoring for patients with insulin-treated type 2 diabetes: a national survey of primary care providers. BMC Prim Care. Mar 08, 2025;26(1):68. [CrossRef] [Medline]
- Linsky A, Simon SR, Stolzmann K, Bokhour BG, Meterko M. Prescribers' perceptions of medication discontinuation: survey instrument development and validation. Am J Manag Care. Nov 2016;22(11):747-754. [FREE Full text] [Medline]
- Davila H, Rosen A, Stolzmann K, Zhang L, Linsky A. Factors influencing providers' willingness to deprescribe medications. J Am Coll Clin Pharm. Oct 14, 2021;5(1):15-25. [FREE Full text] [CrossRef]
Abbreviations
| AI: artificial intelligence |
| PCP: primary care provider |
| VA: Veterans Health Administration |
Edited by I Steenstra, A Mavragani; submitted 31.Dec.2025; peer-reviewed by H Soesanto, DG Goldberg; comments to author 18.Feb.2026; revised version received 12.Mar.2026; accepted 13.Mar.2026; published 22.Apr.2026.
Copyright©Varsha G Vimalananda, Ben Kragen, Alison J Leibowitz. Originally published in JMIR Formative Research (https://formative.jmir.org), 22.Apr.2026.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.