Published in Vol 7 (2023)

Patient Acceptability of Symptom Screening and Patient Education Using a Chatbot for Autoimmune Inflammatory Diseases: Survey Study

Original Paper

1Department of Rheumatology and Immunology, Singapore General Hospital, Singapore, Singapore

2Medicine Academic Clinical Programme, SingHealth-Duke-NUS, Singapore, Singapore

3Department of General Medicine, Sengkang General Hospital, Singapore, Singapore

4Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore, Singapore

5Internal Medicine Clinic, Changi General Hospital, Singapore, Singapore

6Division of Rheumatology and Immunology, Department of Medicine, Changi General Hospital, Singapore, Singapore

*these authors contributed equally

Corresponding Author:

Anindita Santosa, MBBS

Division of Rheumatology and Immunology, Department of Medicine

Changi General Hospital

2 Simei Street 3

Level 6, Medical Centre

Singapore, 529889


Phone: 65 90128379


Background: Chatbots have the potential to enhance health care interaction, satisfaction, and service delivery. However, data regarding their acceptance across diverse patient populations are limited. In-depth studies on the reception of chatbots by patients with chronic autoimmune inflammatory diseases are lacking, although such studies are vital for facilitating the effective integration of chatbots in rheumatology care.

Objective: We aim to assess patient perceptions and acceptance of a chatbot designed for autoimmune inflammatory rheumatic diseases (AIIRDs).

Methods: We administered a comprehensive survey in an outpatient setting at a top-tier rheumatology referral center. The target cohort included patients who interacted with a chatbot explicitly tailored to facilitate diagnosis and obtain information on AIIRDs. Following the RE-AIM (Reach, Effectiveness, Adoption, Implementation and Maintenance) framework, the survey was designed to gauge the effectiveness, user acceptability, and implementation of the chatbot.

Results: Between June and October 2022, we received survey responses from 200 patients, evenly split between 100 initial consultations and 100 follow-up (FU) visits. The mean scores on a 5-point acceptability scale ranged from 4.01 (SD 0.63) to 4.41 (SD 0.54), indicating consistently high ratings across the different aspects of chatbot performance. Multivariate regression analysis indicated that having a FU visit was significantly associated with a greater willingness to reuse the chatbot for symptom determination (P=.01). Furthermore, patients’ comfort with chatbot diagnosis increased significantly after meeting physicians (P<.001). We observed no significant differences in chatbot acceptance by sex, education level, or diagnosis category.

Conclusions: This study underscores that chatbots tailored to AIIRDs have a favorable reception. The inclination of FU patients to engage with the chatbot signifies the possible influence of past clinical encounters and physician affirmation on its use. Although further exploration is required to refine their integration, the prevalent positive perceptions suggest that chatbots have the potential to strengthen the bridge between patients and health care providers, thus enhancing the delivery of rheumatology care to various cohorts.

JMIR Form Res 2023;7:e49239




Introduction

Digital health technologies, including chatbots and conversational artificial intelligence (AI) agents, have the potential to reshape health care delivery and patient outcomes [1,2]. Chatbots use natural language processing and AI to mimic human interactions and foster efficient communication between patients and health care providers. These digital tools can potentially transform medical care by aiding in patient screening, triage, and education, while simultaneously alleviating the strain on health care systems [3].

Although chatbots can potentially transform chronic disease management, evidence of their sustained adoption remains limited, particularly for autoimmune conditions [4,5]. Currently, most chatbot applications focus on oncology and mental health, with minimal customization for inflammatory rheumatic diseases [6,7]. Early diagnosis and treatment of inflammatory rheumatic diseases are critical for preventing irreversible joint damage, disability, and other complications [8]. Unfortunately, significant barriers, including rheumatologist shortages, high costs, and low public awareness, restrict optimal rheumatic care globally. Chatbots can address these challenges by facilitating patient education, self-triage, and access to rheumatology expertise. Despite the immense potential of chatbots to improve rheumatic disease management, there is minimal research on their real-world implementation and efficacy.

To address this gap, we evaluated patients’ perceptions and acceptance of a chatbot purpose-built for rheumatology. We aimed to provide insights into the successful adoption of chatbots in rheumatology care by assessing user attitudes, satisfaction, demographics, and the differences between the first and follow-up (FU) encounters. In the following section, we review the literature on conversational AI agents in health care, focusing on autoimmune inflammatory rheumatic diseases (AIIRDs). We then present the methodology and results of a survey evaluating patients’ acceptance of an AIIRD chatbot in a rheumatology clinic. Finally, we discuss the implications of our findings, limitations, and the future research needed to realize the potential benefits of thoughtfully designed chatbots for improving outcomes in autoimmune diseases.

Literature Review

Conversational AI Agents in Health Care

Chatbots can facilitate early disease detection and streamline referral processes, ultimately contributing to better patient outcomes [9,10]. They show considerable promise in increasing efficiency, reducing costs, and promoting patient satisfaction in various health care settings [11]. Patients are likely to benefit from prompt and accurate information provided by chatbots, thereby enhancing their overall care experience [8,12]. Notable examples include OneRemission (Keenethics), which provides detailed health insights to patients with cancer; Youper (Youper Inc), a personalized conversational assistant dedicated to mental well-being support [13-15]; Babylon Health (eMed Healthcare UK), known for its symptom checking and virtual consultations; and the Engati (Engati) and Inbenta (Inbenta) chatbots, which can manage a range of patient inquiries and provide essential medical information.

The usefulness of digital health care technology has been the subject of an expanding body of research, including how well it performs when used for patient screening and classification during the COVID-19 pandemic [16,17]. Existing evidence indicates that the adoption and acceptability of chatbots in health care are subject to a multitude of technological, socioeconomic, and linguistic barriers. Consequently, despite the considerable potential of chatbots, their acceptability in health care settings remains inconsistent across patient demographics [18-22]. It has been reported that subjective norms, health awareness, and perceived convenience affect attitudes toward medical chatbots [23], and chatbot designers should focus on a user-centered framework and legal implications to address patient concerns. Moreover, chatbots, similar to any digital technology, may pose data privacy concerns, particularly given the sensitive nature of health-related information, and the accuracy of chatbot responses governed by their underlying algorithms can vary [24,25]. Misinterpretation of user input or delivery of inaccurate health advice can lead to suboptimal patient care or health risks. Thus, while chatbots offer promising possibilities for enhancing health care delivery, they should be used judiciously and continually optimized to ensure safety, accuracy, and respect for user privacy.

Conversational AI Agents in AIIRDs

Preliminary evidence indicates that chatbots could play a substantial role in the screening and triaging of rheumatic diseases, emphasizing their potential contribution to this specialized area of medical practice. Unfortunately, few studies have reported results or patient perspectives on chatbots used in the context of chronic diseases [8,12]. To address the challenges in AIIRD identification and prioritization, health care systems must implement innovative strategies that improve screening and triage processes for patients with AIIRDs [26-28]. To this end, digital technologies such as telemedicine, electronic health records, and AI algorithms have been proposed to improve early diagnosis and streamline the referral process [29].

Implementing Conversational AI Agents

The ultimate efficacy of chatbot implementation depends on user acceptability and perceived value [9]. Implementation science is the study of methods and strategies to promote the systematic acceptance of research findings in routine clinical practice to improve the quality and effectiveness of health care. It is grounded in several theoretical frameworks, including the Consolidated Framework for Implementation Research and the RE-AIM (Reach, Effectiveness, Adoption, Implementation and Maintenance) framework [30,31]. These frameworks provide a comprehensive approach for studying the implementation of health care interventions and can help identify the factors that influence adoption and implementation.



Methods

AIIRDs include rheumatoid arthritis, psoriatic arthritis, ankylosing spondylitis, and systemic lupus erythematosus. They are characterized by aberrations in the immune system that lead to chronic inflammation in various parts of the body, particularly the joints.


A chatbot is software designed to interact with humans in their natural language. These interactions usually occur through messaging applications, websites, mobile apps, or over the phone. In health care, chatbots can provide valuable assistance by answering patient queries, helping with diagnoses, or providing medical information.

RE-AIM Framework

The RE-AIM framework serves as a robust model for evaluating the impact of health interventions encompassing 5 pivotal dimensions: reach, effectiveness, adoption, implementation, and maintenance [32,33]. In our study, we used this framework to design a survey to meticulously assess the effectiveness, acceptability, and implementation processes of chatbots in health care settings. Specifically, the survey, detailed in Table 1, was constructed to scrutinize the various facets of participants’ interactions with the chatbot, including its effectiveness, user-friendliness, and potential applicability. The survey questions were structured to resonate with the dimensions of the RE-AIM framework, establishing a coherent link between the survey instrument and the framework, as elucidated below.

Table 1. Chatbot acceptance survey.

Number | Question | Response options
Q1 | Was the chatbot able to answer your question satisfactorily? | 1 strongly disagree to 5 strongly agree
Q2 | Was the chatbot easy to navigate? | 1 strongly disagree to 5 strongly agree
Q3 | Was the information presented easy to understand? | 1 strongly disagree to 5 strongly agree
Q4 | I am comfortable with the use of chatbots to look up information before my consultation | 1 strongly disagree to 5 strongly agree
Q5 | I am comfortable with the use of chatbots to look up information after my consultation | 1 strongly disagree to 5 strongly agree
Q6 | I will use the chatbot again if it allows me to find the cause of my symptoms and how to manage them | 1 strongly disagree to 5 strongly agree
Q7 BC^a | I am comfortable with the idea of a chatbot making a diagnosis based on your symptoms (before consultation) | 1 strongly disagree to 5 strongly agree
Q7 AC^b | I am comfortable with the idea of a chatbot making a diagnosis based on your symptoms (after consultation) | 1 strongly disagree to 5 strongly agree

^a BC: before the consultation.

^b AC: after the consultation.

Effectiveness Dimension

Q1 evaluated the chatbot’s prowess in responding satisfactorily to user inquiries, thereby gauging its effectiveness in addressing diverse concerns. Q3 assessed the comprehensibility of the information rendered by the chatbot to ensure effective interpretation and application in health care decisions. Q5 probed participants’ comfort in using chatbots for information retrieval after consultation, shedding light on their supplementary informational value. The categorization of rheumatological diagnoses into 8 groups measured the chatbot’s impact on disease screening and triaging, aligned with the effectiveness dimension.

Adoption Dimension

Q2 measured the ease of navigation within the chatbot, which is a critical factor that influences user satisfaction and chatbot adoption feasibility. Q6 gauged participants’ willingness to engage with the chatbot in the future, especially if it aids in symptom identification and offers effective management guidance.

Reach Dimension

Q4 investigated participants’ comfort in using chatbots for pertinent information retrieval before the consultation, emphasizing chatbots’ potential in the preconsultation phase.

Implementation Dimension

Q7, posed before the consultation (BC) and after the consultation (AC), explored participants’ comfort levels regarding chatbots making diagnoses based on their symptoms, unveiling any attitudinal shifts toward the chatbots’ diagnostic role after the consultation.
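The item-to-dimension assignment described above can be expressed as a small lookup table. This is a minimal illustrative sketch; the dictionary and function names below are ours, not artifacts from the study:

```python
# Survey items (Table 1) mapped to the RE-AIM dimensions described above.
# Names are illustrative only; the study did not publish data structures.
REAIM_MAP = {
    "Q1": "effectiveness",      # satisfactory answers to user inquiries
    "Q2": "adoption",           # ease of navigation
    "Q3": "effectiveness",      # comprehensibility of information
    "Q4": "reach",              # comfort with preconsultation use
    "Q5": "effectiveness",      # comfort with postconsultation use
    "Q6": "adoption",           # willingness to reuse the chatbot
    "Q7_BC": "implementation",  # diagnosis comfort, before consultation
    "Q7_AC": "implementation",  # diagnosis comfort, after consultation
}

def questions_for(dimension: str) -> list[str]:
    """List the survey items assessing a given RE-AIM dimension."""
    return [q for q, d in REAIM_MAP.items() if d == dimension]
```

For example, `questions_for("effectiveness")` returns the three effectiveness items, mirroring the grouping in the text.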

Participants and Study Design

A pilot study was conducted from June to October 2022, with a predetermined sample size of 200 participants, which included 100 new cases and 100 FU cases. The sample size was determined based on previous studies [34,35]. Upon registration in the rheumatology outpatient clinic, a random sample of patients aged ≥17 years with a diagnosis of AIIRD or a referral for suspected AIIRD was selected. The participants were invited to participate in the survey, and verbal consent was obtained before administering the survey questions. Patients diagnosed by rheumatologists were classified into 3 groups: arthropathies, connective tissue diseases or vasculitides, and other diagnoses. Patient visits were classified into 2 types: first visit (FV), indicating the initial clinic visit, and FU visit, indicating return visits after the initial visit. The patient education level was classified as below university education, university degree, or higher.

Chatbot: SingHealth RheumConnect

SingHealth RheumConnect is a rule-based chatbot designed for internet-enabled devices. Introduced in 2022, it serves as a web-based resource for patients seeking rheumatology care. The chatbot aims to alleviate resource and service deficiencies in rheumatology care by facilitating referrals and aiding patients in symptom management. Figure 1 and Figures S1-S7 in Multimedia Appendix 1 show representative screenshots of the interface. SingHealth RheumConnect offers comprehensive information on various rheumatic diseases and arthritic conditions, covering gout, systemic lupus erythematosus (lupus), myositis, osteoarthritis, psoriatic arthritis, rheumatoid arthritis, spondyloarthritis, systemic sclerosis, and vasculitides. The platform serves patients, caregivers, and the public. It also features an intuitive symptom checker to assess whether symptoms could indicate an autoimmune disease and connects users to essential services such as prescription renewals and appointment changes. The chatbot has now been incorporated into routine use and is available 24/7 at no cost. Personal information is not collected during advice-seeking interactions.

Figure 1. Representative screenshots of the chatbot interface.
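RheumConnect’s internal rules are not published. As a rough, hypothetical sketch of how a rule-based chatbot of this kind operates, user messages can be matched against a keyword table, with a fallback reply when nothing matches. Every keyword, response, and name below is invented for illustration:

```python
# Hypothetical sketch of a rule-based chatbot. A rule-based system maps
# recognized keywords to canned responses rather than generating free text.
# All keywords and responses here are invented, not RheumConnect's rules.
RULES = {
    "joint pain": ("Persistent joint pain with morning stiffness can occur in "
                   "inflammatory arthritis; consider a rheumatology referral."),
    "rash": "A facial rash together with joint symptoms can be a feature of lupus.",
    "appointment": "To change an appointment, please use the appointment service.",
}

DEFAULT = "Sorry, I did not understand that. Could you rephrase your question?"

def reply(user_text: str) -> str:
    """Return the response for the first matching keyword, else a fallback."""
    text = user_text.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return DEFAULT
```

Because every path through such a system is authored in advance, a rule-based design trades flexibility for predictability, which matters when no personal data are collected and answers must stay within vetted content.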

Statistical Analysis

Statistical analyses were performed using Excel (Microsoft Excel 365, Microsoft Corporation) and SPSS software (version 28.0, IBM SPSS Statistics for Windows). Independent sample 2-tailed t tests or 1-way ANOVA were used to compare continuous variables, such as age and questionnaire scores. Chi-square tests were used to compare categorical variables such as sex, education, and diagnosis categories between the groups. A paired t test was conducted to compare the attitudes of the participants regarding the acceptability of chatbot diagnoses before and after clinical consultation. Specifically, this analysis compared the mean scores on the questionnaire items Q7 BC and Q7 AC. Q7 asked about comfort with the idea of a chatbot making a diagnosis based on the reported symptoms.
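The group comparisons described above can be sketched with SciPy. The numbers below are seeded random draws loosely matching Table 2, not the actual study data:

```python
# Sketch of the comparisons described in the text: independent-samples t test,
# chi-square test, and paired t test. Data are simulated, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fv_age = rng.normal(42.5, 13.8, 100)   # hypothetical first-visit ages
fu_age = rng.normal(54.4, 23.0, 100)   # hypothetical follow-up ages

# Independent-samples 2-tailed t test (age by visit type)
t_age, p_age = stats.ttest_ind(fv_age, fu_age)

# Chi-square test on the 2x2 sex-by-visit contingency table from Table 2
table = np.array([[61, 60],    # female: FV, FU
                  [39, 40]])   # male:   FV, FU
chi2, p_sex, dof, _ = stats.chi2_contingency(table)

# Paired t test: the same respondents rate Q7 before vs after consultation
q7_bc = rng.normal(3.15, 0.54, 200)
q7_ac = q7_bc + rng.normal(0.85, 0.4, 200)  # simulated post-consult shift
t_q7, p_q7 = stats.ttest_rel(q7_bc, q7_ac)
```

With these simulated inputs, the age difference and the paired Q7 shift come out significant while the near-identical sex distribution does not, mirroring the pattern reported in the Results.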

Linear regression analysis was performed to assess the factors associated with the Q6 score, which differed significantly between the FV and FU groups. Q6 asked about the intent to use the chatbot again if it helped determine the causes of the symptoms. The predictors entered into the regression model were sex (male or female), age (continuous), type of visit (FV vs FU), education (below university vs university degree or above), and diagnostic category (arthropathies vs connective tissue diseases or vasculitides vs other diagnoses). The Q6 score was treated as a continuous dependent variable. Univariate linear regression was performed first to examine the unadjusted association between each predictor and the Q6 score. Variables with a P value <.25 were entered into the final multivariate model. Multivariate linear regression using the enter method was then conducted to evaluate the independent predictors of the Q6 score after adjusting for the other variables in the model. Beta coefficients with 95% CIs were estimated. Variance inflation factors were examined to assess multicollinearity between predictors. Statistical significance was set at P<.05.
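The modeling pipeline described above (per-predictor univariate fits, a multivariate least-squares fit, and variance inflation factor checks) can be sketched in NumPy. All data and coefficients below are simulated for illustration; they are not the study's:

```python
# Minimal sketch of the regression workflow on simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
sex = rng.integers(0, 2, n)          # 0=female, 1=male
age = rng.normal(48, 20, n)          # years
visit = np.repeat([0, 1], n // 2)    # 0=first visit, 1=follow-up
edu = rng.integers(0, 2, n)          # 0=below university, 1=university+
# Simulated Q6 score: follow-up visits score ~0.24 points higher (cf. Table 4)
q6 = 4.05 + 0.24 * visit + rng.normal(0, 0.6, n)

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def vif(X):
    """VIF_j = 1 / (1 - R^2_j), regressing each column on the others."""
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        b = ols(others, X[:, j])
        pred = np.column_stack([np.ones(X.shape[0]), others]) @ b
        ss_res = np.sum((X[:, j] - pred) ** 2)
        ss_tot = np.sum((X[:, j] - X[:, j].mean()) ** 2)
        out.append(ss_tot / ss_res)  # equals 1 / (1 - R^2_j)
    return out

# Univariate (one predictor at a time) unadjusted slopes
univariate_slopes = {name: ols(col.reshape(-1, 1), q6)[1]
                     for name, col in [("sex", sex), ("age", age),
                                       ("visit", visit), ("edu", edu)]}

# Multivariate model (enter method): all predictors at once
X = np.column_stack([sex, age, visit, edu])
beta = ols(X, q6)   # beta[3] is the adjusted visit coefficient
vifs = vif(X)       # near 1 here, indicating no multicollinearity
```

On this simulated data the adjusted visit coefficient recovers the planted effect of roughly 0.24, and all VIFs stay close to 1, consistent with the range of 1.01 to 1.29 reported in the Results.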

Ethical Considerations

No written consent or ethics approval was obtained because the collected information did not include any personal data and therefore does not allow retrospective identification of the survey participants.

Results

This study included 200 participants, with equal numbers of FV and FU cases (Table 2). Female participants predominated in the sample (n=121, 60.5%), and most participants (n=120, 60%) were university educated. A comparison between the FV and FU groups revealed notable differences in patient characteristics: the FV group had a lower mean age (42.52, SD 13.80 y) than the FU group (54.35, SD 23.04 y; P<.001), with an overall mean age of 48.44 (SD 19.85) years for the entire sample. However, the FV and FU groups did not differ significantly in education level or diagnostic category (both P>.05).

Figure 2 shows that no participant selected the “strongly disagree” or “disagree” option. In the FV group, out of 100 participants, 21 (21%) remained neutral, with a response of “neither agree nor disagree.” Similar sentiments were found among 18 (18%) out of 100 participants in the FU group. Interestingly, only 2 (2%) out of 100 participants from the FV group expressed concerns about the clarity of the information in Q3, whereas the FU group had no such reservations. Figure 3 further illustrates the widespread acceptance of chatbots across the various diagnostic categories.

When evaluating the efficacy of the chatbot in addressing user queries (Q1), both the FV and FU groups showed a consistent mean score of 4.01 (SD 0.63), as shown in Table 2. The navigability of the chatbot (Q2) also received positive feedback, with an average score of 4.09 (SD 0.69) in both groups. The participants were confident in the clarity of the chatbot information, as indicated by Q3, which showed mean scores of 4.16 (SD 0.79) and 4.23 (SD 0.62) for the FV and FU groups, respectively, culminating in an overall mean of 4.20 (SD 0.71).

The comfort of using the chatbot before and after the consultations, as inferred from Q4 and Q5, exhibited consistent means across both groups. Specifically, for the FV and FU groups, Q4 recorded means of 4.00 (SD 0.67) and 4.02 (SD 0.67), respectively, and Q5 achieved means of 4.42 (SD 0.57) and 4.39 (SD 0.51), respectively, reflecting a high degree of comfort among participants. However, responses to Q6, which delved into the chatbot’s ability to discern symptom causes, showed a more pronounced mean score among FU participants at 4.29 (SD 0.57) than the FV participants who scored it at 4.05 (SD 0.66). The overall mean score for this question was 4.17 (SD 0.63).

The participants’ responses indicated different feelings about the chatbot making a diagnosis before and after consultation with the physician. Specifically, when asked if they would be comfortable with the chatbot making a diagnosis before meeting the physician (Q7 BC), participants were neutral, with mean scores of 3.21 (SD 0.59; FV group) and 3.10 (SD 0.48; FU group) on a 5-point scale. However, participants reported feeling significantly more comfortable having the chatbot make a diagnosis after meeting with the physician (Q7 AC; P=.04), with higher mean scores of 4.08 (SD 0.58; FV group) and 3.91 (SD 0.59; FU group). A paired t test confirmed that participants were significantly more at ease with the chatbot making a diagnosis after the consultation than before it (P<.001).

The responses to the questionnaire did not differ significantly by sex, education level, or diagnosis category, as shown in Tables S1-S3 in Multimedia Appendix 1. However, as shown in Table 2, a noticeable difference was observed in the responses to questionnaire item Q6 between the FV and FU groups (P=.006). Multivariate linear regression was used to identify the factors that influenced this difference. The variables, resulting regression coefficients, and 95% CIs are presented in Tables 3 and 4. Sex did not correlate significantly with responses to Q6; male participants had a coefficient of 0.066 (95% CI –0.115 to 0.247; P=.47) compared to female participants. Likewise, neither age nor education level was a significant predictor of the Q6 responses. In contrast, the FU visit type was significantly associated with higher Q6 scores, as evidenced by the coefficient of 0.241 (95% CI 0.059 to 0.423; P=.01). No significant association was identified between the diagnostic categories and the Q6 response (both P>.05). When assessing multicollinearity, the variance inflation factors ranged from 1.01 to 1.29, with an average of 1.15, confirming that it was not a concern.

Table 2. Characteristics of FV^a and FU^b patients.

Characteristic | FV (n=100) | FU (n=100) | Total (N=200) | P value^c
Age (y), mean (SD) | 42.52 (13.80) | 54.35 (23.04) | 48.44 (19.85) | <.001
Questionnaire results^d, mean (SD) | | | |
  Q1 | 4.01 (0.66) | 4.01 (0.61) | 4.01 (0.63) | >.99
  Q2 | 4.08 (0.66) | 4.11 (0.72) | 4.09 (0.69) | .71
  Q3 | 4.16 (0.79) | 4.23 (0.62) | 4.20 (0.71) | .49
  Q4 | 4.00 (0.67) | 4.02 (0.67) | 4.01 (0.66) | .83
  Q5 | 4.42 (0.57) | 4.39 (0.51) | 4.41 (0.54) | .70
  Q6 | 4.05 (0.66) | 4.29 (0.57) | 4.17 (0.63) | .006
  Q7 BC^e | 3.21 (0.59) | 3.10 (0.48) | 3.15 (0.54) | .15
  Q7 AC^f | 4.08 (0.58) | 3.91 (0.59) | 4.00 (0.59) | .04
Sex, n (%) | | | | .89
  Female | 61 (61) | 60 (60) | 121 (60.5) |
  Male | 39 (39) | 40 (40) | 79 (39.5) |
Education, n (%) | | | | .25
  Below university | 36 (36) | 44 (44) | 80 (40) |
  University and above | 64 (64) | 56 (56) | 120 (60) |
Diagnostic category^g, n (%) | | | | .38
  Arthropathies | 67 (68.4) | 77 (77) | 144 (72.7) |
  CTD^h or vasculitides | 20 (20.4) | 16 (16) | 36 (18.2) |
  Other | 11 (11.2) | 7 (7) | 18 (9.1) |

^a FV: first visit.

^b FU: follow-up.

^c P values from t tests (continuous variables) and chi-square tests (categorical variables).

^d Q1-Q7 refer to the questionnaire items that assess patient attitudes.

^e BC: before the consultation.

^f AC: after the consultation.

^g The diagnosis categories assigned by rheumatologists.

^h CTD: connective tissue diseases.

Figure 2. Patient responses to the SingHealth RheumConnect chatbot interaction: (A) FV and (B) FU visits. The x-axis illustrates the range of questions, while the y-axis indicates the percentage of responses across different rating scales. The evaluation of chatbot experience was derived from the questions listed in Table 1 and discussed in the Methods section. Most FV and FU patients agreed that the chatbot provided satisfactory responses, displayed user-friendliness, and conveyed information in a comprehensible manner. AC: after the consultation; BC: before the consultation; FU: follow-up; FV: first visit; Q1-Q7: questionnaire items that assess patient attitudes.
Figure 3. The outcome of the SingHealth RheumConnect chatbot acceptability survey for patients seeking information on different diseases, including (A) crystal arthropathy (n=48), (B) osteoarthritis (n=10), (C) psoriatic arthritis (n=18), (D) rheumatoid arthritis (n=44), (E) spondyloarthropathies (n=10), (F) SLE (n=28), (G) other CTD and vasculitides (including overlap syndromes; n=28), and (H) others (n=14). The y-axis represents the type of rheumatological disease, while the x-axis indicates the percentage of responses across different rating scales for chatbot acceptability. AC: after the consultation; BC: before the consultation; CTD: connective tissue disease; Q1-Q7: questionnaire items that assess patient attitudes; SLE: systemic lupus erythematosus.
Table 3. Variables and values used in the regression.

Variable | Coding
Sex | 0=female, 1=male
Age (y) | Primary value
Visit | 0=FV^a, 1=FU^b visit
Education | 0=below university, 1=university and above
Diagnostic category | 0=arthropathies, 1=connective tissue diseases and vasculitides, 2=others

^a FV: first visit.

^b FU: follow-up.

Table 4. Multivariate linear regression analysis for Q6^a.

Variable | Coefficient (95% CI) | P value
Sex | 0.066 (–0.115 to 0.247) | .47
Age (y) | 0.000 (–0.004 to 0.005) | .86
Visit | 0.241 (0.059 to 0.423) | .01
Education | 0.084 (–0.098 to 0.266) | .36
Diagnostic category | |
  Connective tissue diseases and vasculitides | 0.097 (–0.138 to 0.333) | .42
  Others | 0.162 (–0.148 to 0.472) | .31

^a Q6: question 6 in Table 1.

Discussion

Principal Findings

In a cross-sectional study framed by the RE-AIM framework, we assessed the user satisfaction and acceptability of a health care chatbot among 200 patients, exploring potential variations across demographic strata, such as age, sex, and education. Our cohort, comprising 60.5% female participants, mirrored the demographics in Singapore [36]. Overall, our analysis revealed a favorable reception for the chatbot, as reflected by mean scores nearing 4 on a 5-point scale for most questionnaire items related to information retrieval and diagnosis via the chatbot.

Effect of Interaction With the Physician

We also aimed to elucidate how FV and FU patients, representing different stages of treatment, engaged with and viewed the chatbot. The results revealed key differences between the groups. Specifically, FU patients reported a significantly higher willingness than FV patients to reuse the chatbot for symptom investigation. This suggests that FU patients may become more receptive to chatbots for symptom analysis after gaining familiarity with their clinical context through ongoing care. Furthermore, the survey indicated a favorable shift in patients’ attitudes toward the acceptability of chatbot-based diagnosis after interacting with physicians: patients expressed greater openness to the chatbot making a diagnosis based on their symptoms after the clinical encounter. This highlights that physician endorsement may help mitigate patients’ initial concerns regarding the chatbot’s diagnostic accuracy. These findings align with prior studies demonstrating that health chatbot acceptability can be affected by factors such as the perceived accuracy of diagnosis, the inability to conduct physical examinations, and patient preference for communication with physicians over chatbots [37,38]. Our observations imply that establishing patient confidence in chatbots as legitimate diagnostic decision-making aids relies heavily on ongoing support and reassurance from health care providers as part of the care team [11].

Effect of Age and Sex

We found that patients in all age groups had a similar acceptance of using a chatbot to search for information and as a diagnostic tool. Similarly, sex did not significantly influence the chatbot’s acceptability. These results suggest that chatbots could benefit patients at different stages of treatment as both a source of information and a diagnostic tool. Our findings align with those of a recent study by Iancu and Iancu [23], who found that age and sex did not appear to influence chatbot use, whereas subjective norms, health consciousness, and perceived convenience were influential factors. Furthermore, Chang et al [29] found that attitudes and subjective norms were positively related to individuals’ intentions to use medical chatbots. Taken together, it is evident that to maximize the adoption and use of chatbots, designers and developers should adopt user-centered approaches that address user concerns and problems [39].


Limitations

Despite these insightful findings, this study had several limitations. The participant pool, drawn mainly from clinical settings (new referrals and FU visits at rheumatology clinics), may not fully represent the broader population. The chatbot’s design, which caters to Asian patients diagnosed with AIIRDs, may limit the generalizability of the results. Although structured, the RE-AIM model posed challenges owing to its inherent intricacies. Moreover, the sampling process might have introduced nonresponse bias, as only 200 (16.2%) of the 1232 patients provided consent to participate. Finally, this study did not examine the specific challenges faced by the 31 participants aged >71 years; although this group reported no explicit difficulties, Singapore’s digital literacy programs aimed at seniors may have contributed to their ease of use.


Future Directions

For a more comprehensive understanding of patient acceptance of chatbots, future research could delve into the socioeconomic and cultural factors that influence chatbot adoption, possibly by referencing broader digital literacy initiatives. Strategies to enhance patient engagement, such as gamification or personalized user experiences, could also be explored in subsequent studies. Moreover, addressing challenges such as data privacy and the complexity of medical contexts is crucial for the integration of chatbots in health care. Multicenter studies with larger sample sizes are necessary to refine the accuracy and representativeness of our findings. Transitioning to machine learning–driven prediction models may address concerns regarding chatbot diagnostic accuracy [40]. Furthermore, an education and training framework for health care providers, encompassing informational sessions and hands-on training modules, is essential for successful chatbot integration into health care practice. Finally, establishing standardized guidelines and fostering an iterative improvement process based on provider feedback can significantly contribute to the refinement and broader acceptance of chatbot interfaces in health care settings [41,42].


Conclusions

This study underscores the potential of chatbots tailored for AIIRDs to positively affect rheumatology care delivery. Patients attending FU visits showed a stronger propensity to re-engage with the chatbot, particularly when offered information on symptomatology. Sex, educational attainment, and diagnostic category did not significantly influence chatbot acceptance. Furthermore, comfort with chatbot-mediated diagnoses was greater after physician consultations, suggesting that prior clinical experience and physician validation can significantly improve confidence in emerging digital tools. Although more rigorous investigations are warranted to fine-tune their integration into routine care, the current evidence suggests that appropriately developed chatbots can strengthen the nexus between patients and health care professionals, thereby improving the overall efficacy of rheumatology care.


Acknowledgments

The authors thank this study's participants for giving their time, as well as all members of the SingHealth Department of Rheumatology and Immunology who contributed ideas to the RheumConnect chatbot. We also thank the editors at Editverse Publication Support Services [43] for their assistance with language editing and proofreading. The research presented in this paper did not receive specific funding from the public, commercial, or nonprofit sectors.

Data Availability

The data underlying this paper are available in the paper and its web-based Multimedia Appendix 1.

Authors' Contributions

TCT contributed to the conceptualization, methodology, investigation, visualization, writing of the original draft, review of the writing, and editing. NEBR contributed to the methodology, investigation, writing of the original draft, review of the writing, and editing. JWL contributed to the conceptualization, review of the writing, and editing. AS contributed to the conceptualization, methodology, data curation, formal analysis, investigation, review of the writing, editing, resources, supervision, and visualization. All authors read and approved the submitted version.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Comprehensive collection of questionnaire responses segregated by sex, education level, and diagnostic categories, alongside visuals of the chatbot interface.

DOCX File, 1581 KB

  1. de Cock C, Milne-Ives M, van Velthoven MH, Alturkistani A, Lam C, Meinert E. Effectiveness of conversational agents (virtual assistants) in health care: protocol for a systematic review. JMIR Res Protoc. 2020;9(3):e16934. [FREE Full text] [CrossRef] [Medline]
  2. Fadahunsi KP, Wark PA, Mastellos N, Neves AL, Gallagher J, Majeed A, et al. Assessment of clinical information quality in digital health technologies: international eDelphi study. J Med Internet Res. 2022;24(12):e41889. [FREE Full text] [CrossRef] [Medline]
  3. Inkster B, Sarda S, Subramanian V. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR mHealth uHealth. 2018;6(11):e12106. [FREE Full text] [CrossRef] [Medline]
  4. Bin Sawad A, Narayan B, Alnefaie A, Maqbool A, Mckie I, Smith J, et al. A systematic review on healthcare artificial intelligent conversational agents for chronic conditions. Sensors (Basel). 2022;22(7):2625. [FREE Full text] [CrossRef] [Medline]
  5. Cavagna L, Zanframundo G, Codullo V, Pisu MG, Caporali R, Montecucco C. Telemedicine in rheumatology: a reliable approach beyond the pandemic. Rheumatology (Oxford). 2021;60(1):366-370. [FREE Full text] [CrossRef] [Medline]
  6. Hirsch MC, Ronicke S, Krusche M, Wagner AD. Rare diseases 2030: how augmented AI will support diagnosis and treatment of rare diseases in the future. Ann Rheum Dis. 2020;79(6):740-743. [FREE Full text] [CrossRef] [Medline]
  7. Weeks R, Sangha P, Cooper L, Sedoc J, White S, Gretz S, et al. Usability and credibility of a COVID-19 vaccine chatbot for young adults and health workers in the United States: formative mixed methods study. JMIR Hum Factors. 2023;10:e40533. [FREE Full text] [CrossRef] [Medline]
  8. Solomon DH, Rudin RS. Digital health technologies: opportunities and challenges in rheumatology. Nat Rev Rheumatol. 2020;16(9):525-535. [FREE Full text] [CrossRef] [Medline]
  9. Miles O, West R, Nadarzynski T. Health chatbots acceptability moderated by perceived stigma and severity: a cross-sectional survey. Digit Health. 2021;7:20552076211063012. [FREE Full text] [CrossRef] [Medline]
  10. Nadarzynski T, Miles O, Cowie A, Ridge D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: a mixed-methods study. Digit Health. 2019;5:2055207619871808. [FREE Full text] [CrossRef] [Medline]
  11. Parviainen J, Rantala J. Chatbot breakthrough in the 2020s? an ethical reflection on the trend of automated consultations in health care. Med Health Care Philos. 2022;25(1):61-71. [FREE Full text] [CrossRef] [Medline]
  12. Kothari S, Gionfrida L, Bharath AA, Abraham S. Artificial intelligence (AI) and rheumatology: a potential partnership. Rheumatology (Oxford). 2019;58(11):1894-1895. [FREE Full text] [CrossRef] [Medline]
  13. Nigam B, Mehra N, Niranjanamurthy M. Self-diagnosis in healthcare systems using AI chatbots. In: Hussain A, Tyagi G, Peng SL, editors. IoT and AI Technologies for Sustainable Living: A Practical Handbook. Boca Raton, FL: CRC Press; Oct 18, 2022;79.
  14. Wu Y, Donovan R, Vu B, Engel F, Hemmje M, Afli H. Chatbot based behaviour analysis for obesity support platform. 2020. Presented at: Collaborative European Research Conference 2020; September 10-11, 2020;112-124; Belfast, Northern Ireland, United Kingdom. URL:
  15. Kavitha M, Roobini S, Prasanth A, Sujaritha M. Systematic view and impact of artificial intelligence in smart healthcare systems, principles, challenges and applications. In: Wasid M, Hakak S, Rasool T, Shaikh TA, editors. Machine Learning and Artificial Intelligence in Healthcare Systems: Tools and Techniques. Boca Raton, FL: CRC Press; Jan 03, 2023;25-56.
  16. So H, Szeto CC, Tam LS. Patient acceptance of using telemedicine for follow-up of lupus nephritis in the COVID-19 outbreak. Ann Rheum Dis. 2021;80(6):e97. [FREE Full text] [CrossRef] [Medline]
  17. Bos WH, van Tubergen A, Vonkeman HE. Telemedicine for patients with rheumatic and musculoskeletal diseases during the COVID-19 pandemic; a positive experience in the Netherlands. Rheumatol Int. 2021;41(3):565-573. [FREE Full text] [CrossRef] [Medline]
  18. Abd-Alrazaq AA, Alajlani M, Ali N, Denecke K, Bewick BM, Househ M. Perceptions and opinions of patients about mental health chatbots: scoping review. J Med Internet Res. 2021;23(1):e17828. [FREE Full text] [CrossRef] [Medline]
  19. Amiri P, Karahanna E. Chatbot use cases in the COVID-19 public health response. J Am Med Inform Assoc. 2022;29(5):1000-1010. [FREE Full text] [CrossRef] [Medline]
  20. Andersson G, Cuijpers P, Carlbring P, Riper H, Hedman E. Guided internet-based vs. face-to-face cognitive behavior therapy for psychiatric and somatic disorders: a systematic review and meta-analysis. World Psychiatry. 2014;13(3):288-295. [FREE Full text] [CrossRef] [Medline]
  21. Luo TC, Aguilera A, Lyles CR, Figueroa CA. Promoting physical activity through conversational agents: mixed methods systematic review. J Med Internet Res. 2021;23(9):e25486. [FREE Full text] [CrossRef] [Medline]
  22. Pernencar C, Saboia I, Dias JC. How far can conversational agents contribute to IBD patient health care—a review of the literature. Front Public Health. 2022;10:862432. [FREE Full text] [CrossRef] [Medline]
  23. Iancu I, Iancu B. Interacting with chatbots later in life: a technology acceptance perspective in COVID-19 pandemic situation. Front Psychol. 2022;13:1111003. [FREE Full text] [CrossRef] [Medline]
  24. Karra R, Lasfar A. Effect of questions misspelling on chatbot performance: a statistical study. Cham: Springer International Publishing; 2022. Presented at: ICDTA: International Conference on Digital Technologies and Applications; January 28-30, 2022;124-132; Fez, Morocco. URL: [CrossRef]
  25. Momtazmanesh S, Nowroozi A, Rezaei N. Artificial intelligence in rheumatoid arthritis: current status and future perspectives: a state-of-the-art review. Rheumatol Ther. 2022;9(5):1249-1304. [FREE Full text] [CrossRef] [Medline]
  26. van Nies JAB, Krabben A, Schoones JW, Huizinga TWJ, Kloppenburg M, van der Helm-van Mil AHM. What is the evidence for the presence of a therapeutic window of opportunity in rheumatoid arthritis? a systematic literature review. Ann Rheum Dis. 2014;73(5):861-870. [FREE Full text] [CrossRef] [Medline]
  27. Robinson PC, Brown MA. The window of opportunity: a relevant concept for axial spondyloarthritis. Arthritis Res Ther. 2014;16(3):109. [FREE Full text] [CrossRef] [Medline]
  28. Miloslavsky EM, Bolster MB. Addressing the rheumatology workforce shortage: a multifaceted approach. Semin Arthritis Rheum. 2020;50(4):791-796. [FREE Full text] [CrossRef] [Medline]
  29. Chang IC, Shih YS, Kuo KM. Why would you use medical chatbots? interview and survey. Int J Med Inform. 2022;165:104827. [FREE Full text] [CrossRef] [Medline]
  30. Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated consolidated framework for implementation research based on user feedback. Implement Sci. 2022;17(1):75. [FREE Full text] [CrossRef] [Medline]
  31. Reilly KL, Kennedy S, Porter G, Estabrooks P. Comparing, contrasting, and integrating dissemination and implementation outcomes included in the RE-AIM and implementation outcomes frameworks. Front Public Health. 2020;8:430. [FREE Full text] [CrossRef] [Medline]
  32. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322-1327. [FREE Full text] [CrossRef] [Medline]
  33. Gaglio B, Shoup JA, Glasgow RE. The RE-AIM framework: a systematic review of use over time. Am J Public Health. 2013;103(6):e38-e46. [CrossRef] [Medline]
  34. Aggarwal A, Tam CC, Wu D, Li X, Qiao S. Artificial Intelligence (AI)-based chatbots in promoting health behavioral changes: a systematic review. medRxiv. Preprint posted online July 07, 2022. [FREE Full text] [CrossRef]
  35. Peng ML, Wickersham JA, Altice FL, Shrestha R, Azwa I, Zhou X, et al. Formative evaluation of the acceptance of HIV prevention artificial intelligence chatbots by men who have sex with men in Malaysia: focus group study. JMIR Form Res. 2022;6(10):e42055. [FREE Full text] [CrossRef] [Medline]
  36. Ng X, Low AHL, Chew LC, Chong YY, Fong KY, Lui NL, et al. Disease patterns of rheumatology outpatients seen in a tertiary hospital serving a multi-ethnic, urban Asian population in Singapore. Int J Rheum Dis. Jun 2013;16(3):273-278. [CrossRef] [Medline]
  37. Morris RR, Kouddous K, Kshirsagar R, Schueller SM. Towards an artificially empathic conversational agent for mental health applications: system design and user perceptions. J Med Internet Res. 2018;20(6):e10148. [FREE Full text] [CrossRef] [Medline]
  38. Car LT, Dhinagaran DA, Kyaw BM, Kowatsch T, Joty S, Theng YL, et al. Conversational agents in health care: scoping review and conceptual analysis. J Med Internet Res. 2020;22(8):e17158. [FREE Full text] [CrossRef] [Medline]
  39. Fan X, Chao D, Zhang Z, Wang D, Li X, Tian F. Utilization of self-diagnosis health chatbots in real-world settings: case study. J Med Internet Res. 2021;23(1):e19928. [FREE Full text] [CrossRef] [Medline]
  40. Hügle M, Omoumi P, van Laar JM, Boedecker J, Hügle T. Applied machine learning and artificial intelligence in rheumatology. Rheumatol Adv Pract. 2020;4(1):rkaa005. [FREE Full text] [CrossRef] [Medline]
  41. Nagendran M, Chen Y, Lovejoy CA, Gordon AC, Komorowski M, Harvey H, et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. 2020;368:m689. [FREE Full text] [CrossRef] [Medline]
  42. To QG, Green C, Vandelanotte C. Feasibility, usability, and effectiveness of a machine learning-based physical activity chatbot: quasi-experimental study. JMIR mHealth uHealth. 2021;9(11):e28577. [FREE Full text] [CrossRef] [Medline]
  43. Publication Support Services—Editverse. URL: [accessed 2023-01-08]

AI: artificial intelligence
AC: after the consultation
AIIRD: autoimmune inflammatory rheumatic disease
BC: before the consultation
FU: follow-up
FV: first visit
RE-AIM: Reach, Effectiveness, Adoption, Implementation and Maintenance

Edited by G Eysenbach; submitted 23.05.23; peer-reviewed by Y Cao, E Sezgin; comments to author 17.06.23; revised version received 27.08.23; accepted 05.11.23; published 28.12.23.


©Tze Chin Tan, Nur Emillia Binte Roslan, James Weiquan Li, Xinying Zou, Xiangmei Chen, Anindita Santosa. Originally published in JMIR Formative Research, 28.12.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.