Search Results (1 to 10 of 61 Results)

Assessing the Readability and Quality of Web-Based Resources on Exercise Stress Testing: Cross-Sectional Readability and Quality Analysis

Since then, the readability and quality of online health content accessed by patients have remained a key concern for clinicians, leading to numerous studies on the topic [7-10]. These studies examine various factors, but one of the most consistent is readability—specifically, the grade level at which the material is written.

Munir Rahbe, Dhrumi Mistry, Ali M Khawaja, Natalie A Sous, Alan Y Tso

JMIR Form Res 2025;9:e68000


Comparison of Japanese Mpox (Monkeypox) Health Education Materials and Texts Created by Artificial Intelligence: Cross-Sectional Quantitative Content Analysis Study

Recent studies have shown that AI-generated texts can simplify complex medical information and improve readability, outperforming traditional web-based materials [14-16]. By enhancing clarity and accessibility, AI-generated texts have the potential to reduce health disparities and ensure equitable access to vital health information, particularly during public health crises such as the mpox outbreak.

Shinya Ito, Emi Furukawa, Tsuyoshi Okuhara, Hiroko Okada, Takahiro Kiuchi

JMIR AI 2025;4:e70604


Readability of Online Patient Educational Materials for Rosacea: Systematic Web Search and Analysis

Adhering to these readability standards is essential for improving patient comprehension and ensuring that dermatological resources are accessible to a wider audience. In this study, we assess the readability of available online educational material dedicated to the management and treatment of rosacea. A web search was conducted on April 25, 2025, using Google with the query “Rosacea Patient Information.”

Derek Nguyen, Jennifer Javaheri, Daniel Nguyen, Vy Han

JMIR Form Res 2025;9:e67916


Using Artificial Intelligence ChatGPT to Access Medical Information About Chemical Eye Injuries: Comparative Study

We performed an in-depth linguistic analysis to evaluate the readability and complexity of the AI-generated responses. To achieve this, 2 main readability scores, the Flesch Reading Ease Score (FRES) and the Flesch–Kincaid Grade Level (FKGL), were computed using standard formulas. FRES evaluates text complexity on a scale from 0 to 100, where lower scores suggest more difficult reading. FKGL approximates the US education grade level required to comprehend the text.
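For reference, both indices named above are fixed linear formulas over word, sentence, and syllable counts (the syllable counting itself, which is the hard part, is assumed done elsewhere). A minimal sketch:

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """FRES: 0-100 scale; lower scores indicate more difficult text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """FKGL: approximates the US school grade level needed to read the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example: a 100-word sample with 5 sentences and 130 syllables
fres = flesch_reading_ease(100, 5, 130)
fkgl = flesch_kincaid_grade(100, 5, 130)
print(fres, fkgl)
```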

Layan Yousef Alharbi, Rema Rashed Alrashoud, Bader Shabib Alotaibi, Abdulaziz Meshal Al Dera, Raghad Saleh Alajlan, Reem Rashed AlHuthail, Dalal Ibrahim Alessa

JMIR Form Res 2025;9:e73642


Evaluating the Readability of Pediatric Neurocutaneous Syndromes–Related Patient Education Material Created by a Custom GPT With Retrieval Augmentation

Readability was assessed by averaging the following eight readability formulas: Automated Readability Index, Flesch Reading Ease Formula, Gunning Fog Index, Flesch-Kincaid Grade Level Formula, Coleman-Liau Index, SMOG (Simple Measure of Gobbledygook) Index, Linsear Write Formula, and FORCAST Readability Formula (Multimedia Appendix 2) [8]. Two-tailed t tests and an ANOVA were used for comparison.
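The study's exact formula implementations are not reproduced here; as an illustration of the averaging approach, three of the named grade-level indices (Flesch-Kincaid Grade Level, Automated Readability Index, and Gunning Fog) can be computed from the same surface counts and averaged:

```python
def fkgl(words: int, sentences: int, syllables: int) -> float:
    # Flesch-Kincaid Grade Level
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def ari(characters: int, words: int, sentences: int) -> float:
    # Automated Readability Index (characters = letters and digits)
    return 4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43

def gunning_fog(words: int, sentences: int, complex_words: int) -> float:
    # complex_words = words of three or more syllables
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))

def average_grade(words, sentences, syllables, characters, complex_words):
    # Averaging several indices smooths out each formula's individual bias
    scores = [
        fkgl(words, sentences, syllables),
        ari(characters, words, sentences),
        gunning_fog(words, sentences, complex_words),
    ]
    return sum(scores) / len(scores)
```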

Nneka Ede, Robyn Okereke

JMIR Dermatol 2025;8:e59054


Improving the Readability of Institutional Heart Failure–Related Patient Education Materials Using GPT-4: Observational Study

As prior data regarding accuracy have been promising, an emerging focus has been on investigating the readability of the model’s output. One prior study showed that ChatGPT provides accurate and comprehensive responses to questions related to heart failure, and another demonstrated that its responses were at a college reading level, highlighting the need for further assessment of the readability of GPT’s outputs [12,18].

Ryan C King, Jamil S Samaan, Joseph Haquang, Vishnu Bharani, Samuel Margolis, Nitin Srinivasan, Yuxin Peng, Yee Hui Yeo, Roxana Ghashghaei

JMIR Cardio 2025;9:e68817


Large Language Model–Assisted Surgical Consent Forms in Non-English Language: Content Analysis and Readability Evaluation

The AI-generated consent forms were directly compared with the original version for readability and content quality. To assess the impact of LLM-based simplification, we conducted readability and structural comparisons between the original (before) and AI-edited consent (after) forms. Readability was quantitatively measured using KRead and Natmal—the 2 established readability indices for the Korean language [10,11].

Namkee Oh, Jongman Kim, Sunghae Park, Sunghyo An, Eunjin Lee, Hayeon Do, Jiyoung Baik, Suk Min Gwon, Jinsoo Rhu, Gyu-Seong Choi, Seonmin Park, Jai Young Cho, Hae Won Lee, Boram Lee, Eun Sung Jeong, Jeong-Moo Lee, YoungRok Choi, Jieun Kwon, Kyeong Deok Kim, Seok-Hwan Kim, Gwang-Sik Chun

J Med Internet Res 2025;27:e73222


Enhancing the Readability of Online Patient Education Materials Using Large Language Models: Cross-Sectional Study

While improvements in readability scores have been made over time [17] and human intervention can assist in improving readability [19], challenges remain in bringing readability to an appropriate level on a mass scale. Large language models (LLMs) can consume massive amounts of data and perform a wide array of tasks that have made them useful tools to understand and generate text [26], making them an enticing option to improve the readability of PEMs.

John Will, Mahin Gupta, Jonah Zaretsky, Aliesha Dowlath, Paul Testa, Jonah Feldman

J Med Internet Res 2025;27:e69955


Assessing the Quality and Reliability of ChatGPT’s Responses to Radiotherapy-Related Patient Queries: Comparative Study With GPT-3.5 and GPT-4

To evaluate readability, responses from both versions were assessed using a web-based Flesch Reading Ease Score (FRES) calculator. This calculator determined the responses’ readability using two different indices: the FRES and the Flesch-Kincaid Grade Level (FKGL). These readability tests use mathematical formulas that consider factors such as sentence length and word count.

Ana Grilo, Catarina Marques, Maria Corte-Real, Elisabete Carolino, Marco Caetano

JMIR Cancer 2025;11:e63677


Appropriateness of Web-Based Resources for Home Blood Pressure Measurement and Their Alignment With Guideline Recommendations, Readability, and End User Involvement: Environmental Scan of Web-Based Resources

However, previous research has shown that web-based educational resources for cardiovascular disease risk management do not provide appropriate information or meet the usability or readability needs of adults [18,25-29], and co-design involving end users (such as community members and medical professionals) is an underused method during resource development [24].

Eleanor Clapham, Dean Picone, Samuel Carmichael, Carissa Bonner, Niamh Chapman

JMIR Infodemiology 2025;5:e55248