Published on 12.09.2024 in Vol 8 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/56797.
ChatGPT Use Among Pediatric Health Care Providers: Cross-Sectional Survey Study

Original Paper

1Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States

2Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States

3Division of Hematology/Oncology, The Hospital for Sick Children, Toronto, ON, Canada

4Department of Pediatrics, The University of Toronto, Toronto, ON, Canada

5Children's Healthcare of Atlanta, Atlanta, GA, United States

6School of Medicine, Emory University, Atlanta, GA, United States

7Department of Pediatrics, Harvard Medical School, Boston, MA, United States

8Atrius Health, Newton, MA, United States

9Pediatric Physicians' Organization at Children's Hospital, Wellesley, MA, United States

10Computational Health Informatics Program, Boston Children's Hospital, Boston, MA, United States

Corresponding Author:

Susannah Kisvarday, MD, MPH

Division of General Pediatrics

Boston Children's Hospital

300 Longwood Avenue

Boston, MA, 02115

United States

Phone: 1 5704283137

Email: susannah.kisvarday@childrens.harvard.edu


Background: The public launch of OpenAI’s ChatGPT platform generated immediate interest in the use of large language models (LLMs). Health care institutions are now grappling with establishing policies and guidelines for the use of these technologies, yet little is known about how health care providers view LLMs in medical settings. Moreover, there are no studies assessing how pediatric providers are adopting these readily accessible tools.

Objective: The aim of this study was to determine how pediatric providers are currently using LLMs in their work as well as their interest in using a Health Insurance Portability and Accountability Act (HIPAA)–compliant version of ChatGPT in the future.

Methods: A survey instrument consisting of structured and unstructured questions was iteratively developed by a team of informaticians from various pediatric specialties. The survey was sent via Research Electronic Data Capture (REDCap) to all Boston Children’s Hospital pediatric providers. Participation was voluntary and uncompensated, and all survey responses were anonymous. 

Results: Surveys were completed by 390 pediatric providers. Approximately 50% (197/390) of respondents had used an LLM; of these, almost 75% (142/197) were already using an LLM for nonclinical work and 27% (52/195) for clinical work. Providers detailed the various ways they are currently using an LLM in their clinical and nonclinical work. Only 29% (n=105) of 362 respondents indicated that ChatGPT should be used for patient care in its present state; however, 73.8% (273/368) reported they would use a HIPAA-compliant version of ChatGPT if one were available. Providers’ proposed future uses of LLMs in health care are described.

Conclusions: Despite significant concerns and barriers to LLM use in health care, pediatric providers are already using LLMs at work. This study will give policy makers needed information about how providers are using LLMs clinically.

JMIR Form Res 2024;8:e56797

doi:10.2196/56797




Introduction

The public launch of ChatGPT by OpenAI in November 2022 generated immediate interest from a wide range of professionals as well as the public. The chatbot is based on a generative pretrained transformer large language model (GPT LLM) that processes large amounts of text to create an artificial intelligence (AI) system, which can then be repurposed across a variety of domains and diverse tasks [1]. Researchers and health care organizations have begun investigating how LLMs could be used and adapted within the medical field.

Some of the emerging potential applications of LLMs in medicine have included knowledge retrieval, which OpenAI has used in its own promotional material, as well as clinical decision support, medical note-taking, and composing patient communications [2-5]. However, potential problems with the application of LLMs to medicine are still not well understood. One early concern has been the phenomenon of AI “hallucinations,” in which the LLM can unpredictably return plausible sounding but incorrect or nonsensical answers to prompts [6,7]. Additionally, the content and phrasing of questions/prompts can significantly influence the LLM’s output, potentially impacting reliability [8-10]. Concerningly, the publicly available version of ChatGPT has been shown to have a diagnostic error rate of 83% in pediatric cases [11]. Other concerns regarding the clinical use of LLMs include privacy risks, lack of transparency, and bias perpetuation. In response, many institutions have begun drafting guidelines for using ChatGPT and other generative AI tools in health care settings [12].

The extent to which LLMs are already being used in health care is unclear. There is a need for better assessment of the current use practices, future intended uses, and general knowledge of health care providers regarding generative AI tools. Understanding the concerns and perspectives of clinical providers will be valuable in guiding the future development of both the AI tools themselves as well as the guidelines and principles for the use of such tools in health care.

Early surveys of ChatGPT/LLM use have focused broadly on uses beyond clinical care, have assessed perceptions rather than use, or have been limited in scope [13-17]. There is a dearth of information about health care providers’ perceptions of the benefits and barriers of LLMs in health care, and no study has assessed how pediatric providers are already using publicly available tools in their clinical and nonclinical work. This study sought to describe the knowledge and use of LLMs such as ChatGPT by physicians and advanced practice providers at a large, academic, free-standing children’s hospital.


Methods

Study Design

This study used a cross-sectional survey design to explore clinicians’ knowledge and use of LLMs such as ChatGPT. The survey instrument was developed using a modified Delphi method by an expert panel of 10 physician informaticians from various pediatric specialties; survey questions were collaboratively codesigned with multiple rounds of feedback until consensus was achieved. The survey consisted of a series of structured and unstructured questions and used adaptive questioning; the full survey, including the adaptive questioning logic, is available in Multimedia Appendix 1. The survey was pilot-tested on a group of 5 pediatricians prior to being sent to the entire target population.

Ethical Considerations

The study protocol, survey, and recruitment tool were granted full approval by the Boston Children’s Hospital Institutional Review Board (#P00045317). Participation in the survey was voluntary and uncompensated; respondents were informed of the purpose of the study and assured of their anonymity. The informed consent and privacy and confidentiality protection language is provided in Multimedia Appendices 1 and 2.

Recruitment

This closed survey study was conducted at Boston Children’s Hospital, a large academic urban pediatric health care system. The target sample was all Boston Children’s Hospital physicians and advanced practice providers in both hospital and outpatient clinic settings. Recruitment emails (see Multimedia Appendix 2) were sent via Research Electronic Data Capture (REDCap), a secure, web-based data capture application, which was also used for survey administration [18,19]. Survey recruitment and data collection started on October 12, 2023, and extended through November 14, 2023. Reminder emails were sent via REDCap to nonrespondents a maximum of 2 times.
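
As context for the data collection workflow described above, REDCap exposes a record export API through which survey responses can be pulled for analysis. The following is a minimal sketch, assuming a hypothetical host URL and project API token; it is illustrative only and not the export code used in this study.

```python
# A minimal sketch of exporting REDCap survey responses, assuming a
# hypothetical host URL and project token; not the study's actual export code.
import requests

REDCAP_URL = "https://redcap.example.org/api/"  # hypothetical host
payload = {
    "token": "YOUR_PROJECT_API_TOKEN",  # placeholder; never hard-code in practice
    "content": "record",
    "format": "json",
    "type": "flat",                     # one row per record
}
resp = requests.post(REDCAP_URL, data=payload, timeout=30)
resp.raise_for_status()
records = resp.json()                   # list of dicts, one per survey response
print(len(records), "responses exported")
```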

Data Analysis

Survey responses were analyzed in aggregate with minimum subgroup sizes of 10 responses to minimize the risk of reidentification of participants. Four survey questions included an “other” free-text option to capture concepts not covered by the provided answer choices. These free-text responses were analyzed qualitatively using the following methods. For each question, the predetermined discrete survey responses were used as a provisional codebook, which was expanded through inductive content analysis of the free-text responses [20]. To create the expanded codebook, 2 researchers (DJK and NR), one of whom was not involved with the data acquisition process, reviewed free-text responses and generated additional codes through an iterative process involving consensus meetings until no new codes were identified [21,22]. Coding conflicts were resolved by a third researcher (JDH). This expanded codebook was then applied to categorize free-text survey responses. Following coding, the 3 researchers (DJK, NR, and JDH) organized the coded responses into broader themes through an iterative process of consensus meetings.
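
To illustrate the aggregation rule above, the following is a minimal sketch in Python, assuming a hypothetical pandas DataFrame of survey responses; the column names and counts are invented for illustration and are not the study data.

```python
# A minimal sketch of aggregate reporting with small-subgroup suppression,
# using a hypothetical DataFrame with one row per respondent.
import pandas as pd

responses = pd.DataFrame({
    "role": ["Attending"] * 12 + ["Resident/fellow"] * 4,
    "used_llm": [True] * 7 + [False] * 5 + [True, True, False, False],
})

counts = responses.groupby("role")["used_llm"].agg(users="sum", total="count")
counts["pct"] = (100 * counts["users"] / counts["total"]).round(1)

# Report only subgroups meeting the minimum size of 10 to reduce
# the risk of reidentifying individual respondents
MIN_SUBGROUP = 10
print(counts[counts["total"] >= MIN_SUBGROUP])
```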

Subgroup Analysis

The χ² test of independence, followed by a post hoc analysis of adjusted residuals, was used to assess differences in survey question responses across demographic variables, including age, gender, race/ethnicity, and clinical role.
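
The following is a minimal sketch of this procedure, using an invented 2x2 contingency table (the counts are illustrative, not the study data); cells whose adjusted residual exceeds 1.96 in absolute value are the ones driving a significant overall χ² result.

```python
# A minimal sketch of a chi-square test of independence followed by post hoc
# adjusted (standardized) residuals; the 2x2 table is hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: male, female; columns: used ChatGPT, did not (illustrative counts)
observed = np.array([[55, 30],
                     [120, 173]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, P={p:.3f}")

# Haberman adjusted residuals: (O - E) / sqrt(E * (1 - row prop) * (1 - col prop));
# |residual| > 1.96 flags the cells driving a significant overall result
n = observed.sum()
row_prop = observed.sum(axis=1, keepdims=True) / n
col_prop = observed.sum(axis=0, keepdims=True) / n
adjusted_residuals = (observed - expected) / np.sqrt(
    expected * (1 - row_prop) * (1 - col_prop)
)
print(adjusted_residuals)
```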


Results

Demographic Characteristics

Surveys were sent to a total of 3127 physicians and advanced practice providers via email; we received 390 (12.5%) completed survey responses. As shown in Table 1, most respondents self-identified as female (n=293, 76.3%), White or European (n=324, 83.7%), and either an attending physician (n=165, 42.4%) or advanced practice provider (n=110, 28.3%).

Table 1. Characteristics of respondents who voluntarily participated in the ChatGPT/large language model survey during the survey data collection period (N=390). Values are respondents, n (%).

Gender
  Female: 293 (76.3)
  Male: 85 (22.1)
  Nonbinary: 2 (0.5)
  Prefer not to answer: 4 (1.0)

Age (years)
  ≤29: 24 (6.2)
  30-39: 134 (36.2)
  40-49: 110 (28.3)
  50-59: 69 (17.7)
  60-69: 34 (8.7)
  ≥70: 11 (2.8)

Race/ethnicity
  American Indian or Alaska Native: 1 (0.3)
  Asian or Asian American: 36 (9.3)
  Black or African American: 13 (3.4)
  Hispanic or Latino/a/e: 14 (3.6)
  Native Hawaiian or Pacific Islander: 0 (0)
  White or European: 324 (83.7)
  Something else: 4 (1.0)
  Prefer not to say: 12 (3.1)

Role
  Attending physician: 165 (42.4)
  Advanced practice provider: 110 (28.3)
  Resident/fellow: 32 (8.2)
  Other: 82 (21.1)

Specialty
  Anesthesia: 14 (3.6)
  Emergency medicine: 20 (5.1)
  General pediatrics: 59 (15.2)
  Pediatric subspecialty: 149 (38.3)
  Radiology: 7 (1.8)
  Pathology: 2 (0.5)
  Surgical specialty: 50 (12.9)
  Other: 88 (22.6)

Familiarity With and Current Use of ChatGPT

Among the 390 respondents, 288 (73.7%) indicated that they were familiar with ChatGPT or another LLM and an additional 83 (21.2%) indicated that they had heard of ChatGPT but did not really know what it is. Only 19 (4.9%) respondents reported that they had not heard of ChatGPT. Of those who had heard of ChatGPT (n=371), 197 (53.1%) had used ChatGPT or a similar model. Only 52 (26.7%) of 195 respondents who were using an LLM had used it clinically. Reported clinical uses of LLMs are shown in Table 2; the most common uses were drafting school and work letters and drafting prior authorizations.

Overall, 72.4% (142/196) of respondents to this question reported using LLMs for nonclinical work. Nonclinical use cases included drafting emails (55/139, 39.6%); creating outlines for grants, papers, or teaching materials (52/139, 37.4%); drafting a letter of recommendation (51/139, 36.7%); and writing code (eg, for statistical analysis or data visualization) (18/139, 12.9%). Free-text uses reported under “other” are shown in Table 3.

Most respondents did not think ChatGPT should be used for patient care in its present state (256/390, 70.9%). Concerns selected by respondents included accuracy or reliability (319/390, 87.2%), patient privacy or security (237/390, 64.8%), lack of clarity about how ChatGPT makes decisions (232/390, 63.4%), lack of regulation (225/390, 61.5%), and potential bias in the data model (219/390, 59.8%). Free-text responses for this question are shown in Table 3.

Table 2. Clinical use cases endorsed by survey respondents who indicated that they had used ChatGPT or a large language model (LLM) in their clinical work (n=52). Responses to “How have you used ChatGPT/LLM in your clinical work (select all that apply)?” Values are respondents, n (%).

  Draft school or work letter: 26 (50)
  Draft prior authorization: 18 (35)
  Generate patient education materials: 15 (29)
  Generate differential diagnosis: 12 (23)
  Other: 12 (23)
  Ask a specific clinical question (not mentioned above): 11 (21)
  Draft all or part of a clinical note: 8 (15)
  Suggest a treatment plan: 8 (15)
  Respond to patient inbox messages: 5 (10)
  Draft all or part of a discharge summary: 3 (6)
  Draft handoff documentation: 3 (6)
Table 3. Free-text responses collected for 4 questions regarding participant use of and beliefs about large language models in clinical and nonclinical work. Each entry lists the coded survey response (a), the number of respondents, and an example response.

How have you used ChatGPT to help you with your non-clinical work?
  Information search (n=12): "It is a helpful adjunct to online searching. Helps you quickly narrow what you are looking for (assuming you don't want a broad search)"
  Plan recreational activity (n=8): "Used for coming up with ideas for nonwork-related group activities"
  Summarize text (n=5): "Summarize review papers into usable notes."
  Revise communication (n=5): "Editing text for grammar"
  Literature review (n=3): "Triage/screen PubMed abstracts to identify references of interest"
  Generate title (n=3): "Generate catchy titles for manuscripts and presentations."
  Ideation (n=3): "Generate research ideas"
  Draft mass communication (n=3): "social media for my business"
  Creative writing (n=2): "Write poems (in English and other languages), generate ideas"
  Well-being programming (n=2): "Generate relaxation scripts."
  Workflow (n=1): "Write workflow proposals"
  Draft cover letter (n=1): "Write … cover letters"
  Translation (n=1): "translation of materials from English to another language"
  Task management (n=1): "organizing to-do lists"

What concerns do you have about using ChatGPT clinically?
  Perceived lack of utility (n=5): "Still not clear on how it would be used in healthcare"
  Potential bias in the data model (n=3): "At times, ChatGPT is confounded by the presence of wrong data and, therefore, presents clearly inaccurate statements."
  Legal (n=3): "legal concerns- I am so careful about my documentation, and I just don't think chat GPT will ever word things the way I need it to help me in medicolegal situations."
  Automation bias (n=2): "Worried about clinician interpretation of ChatGPT output … and cannot replace clinical reasoning"
  Plagiarism (n=2): "It is plagiarism on steroids."
  Learning curve (n=1): "Would not like to have to master a new technology in addition to the onslaught of requests for computer interface as it is"
  Depersonalization (n=1): "That it could take away from collaborative development of an illness explanation that provider and patient/family engage in together."
  Skill atrophy (n=1): "Humans writing reports allows clinicians to integrate data in a way that supports clinical decision making and patient counseling. I am already finding a lack of critical thinking skills in graduate students. Push button documentation would be efficient (and report writing is arduous) but we all still need to think."

How would you use the Boston Children’s Hospital HIPAA (b)-compliant version of ChatGPT if it were available?
  Research (n=6): "My AI robot will ... learn to sort data in redcap"
  Translation (n=3): "I use it probably in the most simple of ways to translate patient handouts into their language."
  Summarize clinical narrative (n=3): "Summarization of complex patient medical history and relevant clinical information and other data aggregation tasks (e.g., ascertain primary/longitudinal care team members involved in patient's care)"
  Extract data from narratives (n=1): "Review patient charts and imaging reports to generate tabular data for research."
  Workflow (n=1): "Lots of potential nonclinical purposes, describing workflow, responsibility mapping"

If a HIPAA-compliant version of ChatGPT were available, what types of information would you feel comfortable entering?
  Demographic data (n=1): "Name of patient’s school"
  Patient medications (n=1): "Info related creating a prior auth insurance, medication, dx, etc."

(a) The responses were analyzed and codified by a team of 3 physicians using formal qualitative methodology.

(b) HIPAA: Health Insurance Portability and Accountability Act.

Future Use of ChatGPT

Among the 367 respondents to this question, 272 (74.1%) indicated they would use a version of ChatGPT if one were available that was compliant with the Health Insurance Portability and Accountability Act (HIPAA) [23]. Table 4 shows examples of how they envisioned using a HIPAA-compliant version of ChatGPT. If such a model were available, most participants indicated that they would feel comfortable entering patient diagnoses (223/266, 83.9%), age (188/266, 70.7%), and clinical questions without patient information (186/266, 69.9%); fewer would feel comfortable entering a patient’s name (101/266, 38.0%), medical record number (87/266, 32.7%), date of birth (98/266, 36.8%), or whole notes from a patient chart (99/266, 37.2%). Table 3 shows the results of our analysis for free-text responses.

Table 4. Clinical use cases envisioned by pediatric providers who responded to the survey question about how they would use a Health Insurance Portability and Accountability Act (HIPAA)–compliant version of ChatGPT if one were available (n=270). Responses to “How would you use a HIPAA compliant version of ChatGPT (select all that apply)?” Values are respondents, n (%).

  Draft school or work letter: 209 (77)
  Generate patient education materials: 191 (71)
  Draft all or part of a clinical note: 169 (63)
  Draft prior authorization: 159 (59)
  Ask ChatGPT a specific clinical question: 114 (42)
  Draft all or part of a discharge summary: 104 (39)
  Generate differential diagnosis: 99 (37)
  Respond to patient inbox messages: 87 (32)
  Suggest a treatment plan: 76 (28)
  Draft handoff documentation: 72 (27)
  Other: 23 (9)

Subgroup Analysis

Trainees were more likely to have used ChatGPT than other respondents (P=.01); the percentage who endorsed using ChatGPT clinically was also higher among trainees (40%) than nontrainees (24%), but this difference did not reach statistical significance (P=.06). Male respondents were also more likely to have used ChatGPT (P=.005). Respondents in the ≤29 years age group were more likely to be familiar with ChatGPT, whereas those in the ≥70 years age group were less likely (P=.002). Responses to the remaining survey questions did not differ significantly across demographic groups. Specifically, there was no statistically significant difference in whether an LLM was being used for clinical and/or nonclinical work, the endorsed current use cases, expressed concerns regarding LLM use in health care, desire for a HIPAA-compliant LLM, or endorsed planned uses for a HIPAA-compliant LLM.


Discussion

Principal Findings

This study demonstrates that pediatric providers are already using LLMs in both their clinical and nonclinical work. Because of the known limitations of LLMs in a clinical setting, including demonstrated low diagnostic accuracy, it is important to know how pediatricians are using these tools. This study adds to the current literature by providing granular information about how people are currently using LLMs in their work as well as detailing ways that providers envision using a HIPAA-compliant future version of ChatGPT.

Nearly all survey respondents had heard of ChatGPT, and just over half had used it or a similar tool. While nearly 75% of LLM users indicated that they are already using LLMs in their nonclinical work, only slightly over 25% indicated that they are currently using an LLM for clinical work. Moreover, the most common clinical uses reported were administrative tasks such as drafting letters, prior authorizations, and patient education materials.

Similar to other early studies, we found that clinicians are enthusiastic about using LLMs (specifically, a HIPAA-compliant LLM) in clinical and nonclinical work. It is notable that almost one-third of pediatric providers in this study indicated that they would feel comfortable using LLMs for patient care in their present state. Additionally, almost three-quarters of respondents indicated that they would use a HIPAA-compliant version of ChatGPT if one were available and described a variety of ways that they would use it. Most of the envisioned use cases for a HIPAA-compliant LLM were still administrative or operational; however, other common uses included drafting all or part of a clinical note as well as a variety of uses related to clinical decision support and clinical documentation.

Since we have demonstrated that LLMs are already being used clinically and that there is strong interest in expanding their use, it is imperative that health care systems and government agencies create thoughtful policies and regulations for LLM use in health care. The Biden administration recently announced an executive order that directs actions to maximize the promise and manage the risks of AI [24]. Clinical informaticians are needed to help navigate the careful implementation of AI tools into clinical care.

Limitations and Future Work

This study has limitations. Most notably, generalizability is limited by the low response rate and the pediatric provider population at a single institution. Another limitation is selection bias, as providers using LLMs may be more interested in completing a survey related to this topic. Similarly, although recent American Board of Pediatrics statistics show that 67% of pediatricians are female and 57% are White [25], our survey respondents were even more skewed toward these populations. Self-reported data may also contain some social desirability bias, as respondents may attempt to demonstrate that they are using these technologies in acceptable ways. Finally, we did not ask whether respondents worked in an inpatient, outpatient, or other setting; not having that information limits some interpretation of the response data. For example, the number of providers interested in using LLMs to write discharge summaries is less interpretable because these documents are not generally written by outpatient providers.

The enthusiasm we found for the future use of LLMs invites further investigation of LLM use in health care. We propose evaluating differences in use cases by clinical work setting, such as the emergency department, inpatient units, outpatient clinics, and procedural practice. There would also be value in determining whether survey results differ across practice settings (eg, nonacademic, nonurban, adult-focused, or differently resourced settings) or by geographic location; thus, we propose a larger study across institutions in the future. In-depth interviews and other qualitative methods could be used to gain deeper insights into providers’ LLM use and beliefs. Finally, exploring patients’ perceptions and current use of LLMs would be of great value.

Conclusions

This survey study adds to the corpus of knowledge on how providers are thinking about and using LLMs in a clinical context. LLMs will add to the series of digital tools in the clinical ecosystem meant to help advance clinical care. Despite significant concerns and barriers to LLM use in health care, this survey demonstrates that these tools are already commonly used and there is enthusiasm for their future use. Knowing how providers are using LLMs in their clinical and nonclinical work will help guide policy and regulations regarding the health care use of AI. As informaticians, it is incumbent upon us to support the appropriate use of these technologies to improve patient care, while also monitoring for their unintended consequences.

Data Availability

The datasets generated and analyzed during this study are available from the corresponding author on reasonable request.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Survey questions.

DOCX File, 25 KB

Multimedia Appendix 2

ChatGPT survey recruitment email.

DOCX File, 15 KB

  1. Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, et al. Language models are few-shot learners. 2020. Presented at: 34th Conference on Neural Information Processing Systems (NeurIPS 2020); December 6-12, 2020; Vancouver, Canada. URL: https://papers.nips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
  2. Singhal K, Azizi S, Tu T, Mahdavi SS, Wei J, Chung HW, et al. Large language models encode clinical knowledge. Nature. Aug 12, 2023;620(7972):172-180. [FREE Full text] [CrossRef] [Medline]
  3. Ali SR, Dobbs TD, Hutchings HA, Whitaker IS. Using ChatGPT to write patient clinic letters. Lancet Digit Health. Apr 2023;5(4):e179-e181. [FREE Full text] [CrossRef] [Medline]
  4. Lee P, Bubeck S, Petro J. Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. N Engl J Med. Mar 30, 2023;388(13):1233-1239. [CrossRef] [Medline]
  5. Goodman RS, Patrinely JR, Stone CA, Zimmerman E, Donald RR, Chang SS, et al. Accuracy and reliability of chatbot responses to physician questions. JAMA Netw Open. Oct 02, 2023;6(10):e2336483. [FREE Full text] [CrossRef] [Medline]
  6. Zhang P, Kamel Boulos MN. Generative AI in medicine and healthcare: promises, opportunities and challenges. Future Internet. Aug 24, 2023;15(9):286. [CrossRef]
  7. Rabbani N, Brown C, Bedgood M, Goldstein RL, Carlson JL, Pageler NM, et al. Evaluation of a large language model to identify confidential content in adolescent encounter notes. JAMA Pediatr. Mar 01, 2024;178(3):308-310. [CrossRef] [Medline]
  8. Snow J. ChatGPT can give great answers. But only if you know how to ask the right question. Wall Street Journal. Apr 12, 2023. URL: https://www.wsj.com/articles/chatgpt-ask-the-right-question-12d0f035 [accessed 2023-11-19]
  9. Strong E, DiGiammarino A, Weng Y, Kumar A, Hosamani P, Hom J, et al. Chatbot vs medical student performance on free-response clinical reasoning examinations. JAMA Intern Med. Sep 01, 2023;183(9):1028-1030. [FREE Full text] [CrossRef] [Medline]
  10. Koopman B, Zuccon G. ChatGPT tell me what I want to hear: How different prompts impact health answer correctness. In: Bouamor H, Pino J, Bali K, editors. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Singapore: Association for Computational Linguistics; 2023:15012-15022.
  11. Barile J, Margolis A, Cason G, Kim R, Kalash S, Tchaconas A, et al. Diagnostic accuracy of a large language model in pediatric case studies. JAMA Pediatr. Mar 01, 2024;178(3):313-315. [CrossRef] [Medline]
  12. Initial guidelines for using ChatGPT and other generative AI tools at Harvard. Harvard University Information Technology. URL: https://huit.harvard.edu/news/ai-guidelines [accessed 2023-11-19]
  13. Ayoub NF, Lee Y, Grimm D, Divi V. Head-to-head comparison of ChatGPT versus Google Search for medical knowledge acquisition. Otolaryngol Head Neck Surg. Jun 2024;170(6):1484-1491. [CrossRef] [Medline]
  14. Temsah M, Aljamaan F, Malki KH, Alhasan K, Altamimi I, Aljarbou R, et al. ChatGPT and the future of digital health: a study on healthcare workers' perceptions and expectations. Healthcare. Jun 21, 2023;11(13):1812. [FREE Full text] [CrossRef] [Medline]
  15. Parikh PM, Talwar V, Goyal M. ChatGPT: An online cross-sectional descriptive survey comparing perceptions of healthcare workers to those of other professionals. Cancer Res Stat Treat. 2023;6(1):32-36. [CrossRef]
  16. Iyengar KP, Yousef MMA, Nune A, Sharma GK, Botchu R. Perception of Chat Generative Pre-trained Transformer (Chat-GPT) AI tool amongst MSK clinicians. J Clin Orthop Trauma. Sep 2023;44:102253. [CrossRef] [Medline]
  17. Hu J, Liu F, Chu CM, Chang YT. Health care trainees' and professionals' perceptions of ChatGPT in improving medical knowledge training: rapid survey study. J Med Internet Res. Oct 18, 2023;25:e49385. [FREE Full text] [CrossRef] [Medline]
  18. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. Apr 2009;42(2):377-381. [FREE Full text] [CrossRef] [Medline]
  19. Harris PA, Taylor R, Minor BL, Elliott V, Fernandez M, O'Neal L, et al. REDCap Consortium. The REDCap consortium: building an international community of software platform partners. J Biomed Inform. Jul 2019;95:103208. [FREE Full text] [CrossRef] [Medline]
  20. Miles M, Huberman M, Saldana J. Qualitative Data Analysis: A Methods Sourcebook. Thousand Oaks, CA: Sage Publications; 2018.
  21. Ancker J, Benda N, Reddy M, Unertl KM, Veinot T. Guidance for publishing qualitative research in informatics. J Am Med Inform Assoc. Nov 25, 2021;28(12):2743-2748. [FREE Full text] [CrossRef] [Medline]
  22. Cofie N, Braund H, Dalgarno N. Eight ways to get a grip on intercoder reliability using qualitative-based measures. Can Med Educ J. May 29, 2022;13(2):73-76. [FREE Full text] [CrossRef] [Medline]
  23. Health information privacy. US Department of Health and Human Services Office for Civil Rights. URL: https://www.hhs.gov/hipaa/index.html [accessed 2024-04-19]
  24. Executive order on the safe, secure, and trustworthy development and use of artificial intelligence. The White House. Oct 30, 2023. URL: https:/​/www.​whitehouse.gov/​briefing-room/​presidential-actions/​2023/​10/​30/​executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/​ [accessed 2023-12-05]
  25. Turner A, Gregg C, Leslie LK. Race and ethnicity of pediatric trainees and the board-certified pediatric workforce. Pediatrics. Sep 01, 2022;150(3):e2021056084. [CrossRef] [Medline]


Abbreviations

AI: artificial intelligence
GPT LLM: generative pretrained transformer large language model
HIPAA: Health Insurance Portability and Accountability Act
LLM: large language model
REDCap: Research Electronic Data Capture


Edited by A Mavragani; submitted 28.01.24; peer-reviewed by E Koski, DØ Madsen, R Zhang, GK Gupta, S Rajput, L Zhu; comments to author 18.04.24; revised version received 10.05.24; accepted 14.07.24; published 12.09.24.

Copyright

©Susannah Kisvarday, Adam Yan, Julia Yarahuan, Daniel J Kats, Mondira Ray, Eugene Kim, Peter Hong, Jacob Spector, Jonathan Bickel, Chase Parsons, Naveed Rabbani, Jonathan D Hron. Originally published in JMIR Formative Research (https://formative.jmir.org), 12.09.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.