
Search Results (1 to 10 of 43)

Sentiment Analysis Using a Large Language Model–Based Approach to Detect Opioids Mixed With Other Substances Via Social Media: Method Development and Validation

They applied 3 variants of the pretrained bidirectional encoder representations from transformers (BERT) model, namely mBERT, SciBERT, and BioBERT, to generate embeddings used as features for various ML classifiers, including decision trees (DTs) and DL models. Model performance was assessed using precision, recall, and F1-score metrics.
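
As a rough illustration of this kind of pipeline, the sketch below feeds mean-pooled mBERT embeddings to a decision tree and scores it with precision, recall, and F1. It assumes Hugging Face transformers and scikit-learn; the texts, labels, and pooling choice are placeholders, not the study's data or exact method.

```python
# Sketch: BERT embeddings as features for a classic ML classifier.
# Placeholder texts/labels; mean pooling is one common (assumed) choice.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_recall_fscore_support

name = "bert-base-multilingual-cased"  # mBERT; SciBERT/BioBERT swap in here
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(texts):
    """Mean-pool the last hidden state into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (B, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1)           # ignore padding
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

clf = DecisionTreeClassifier().fit(
    embed(["toy post one", "toy post two"]), [0, 1])       # placeholder data
preds = clf.predict(embed(["held-out toy post"]))
p, r, f1, _ = precision_recall_fscore_support(
    [1], preds, average="binary", zero_division=0)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```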

Muhammad Ahmad, Ildar Batyrshin, Grigori Sidorov

JMIR Infodemiology 2025;5:e70525

Detecting Redundant Health Survey Questions by Using Language-Agnostic Bidirectional Encoder Representations From Transformers Sentence Embedding: Algorithm Development Study

A range of models has been explored, such as convolutional neural networks [19]; transformer-based architectures such as bidirectional encoder representations from transformers (BERT), robustly optimized BERT approach, and XLNet [20-22]; and the Siamese network [23]. Despite their promising performance, most of these models have been limited to single-language settings—predominantly English [20-22] or Chinese [19,23].
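
A minimal sketch of the sentence-embedding approach the title refers to, assuming the sentence-transformers package and its LaBSE checkpoint; the example questions and the 0.85 similarity cutoff are illustrative, not the study's settings:

```python
# Sketch: flag near-duplicate survey questions with LaBSE embeddings.
# The questions and the 0.85 cutoff are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")
questions = [
    "How often do you exercise per week?",
    "In a typical week, how many times do you work out?",
    "Do you currently smoke?",
]
emb = model.encode(questions, convert_to_tensor=True, normalize_embeddings=True)
sim = util.cos_sim(emb, emb)                     # pairwise cosine similarity

for i in range(len(questions)):
    for j in range(i + 1, len(questions)):
        if sim[i][j] > 0.85:                     # illustrative threshold
            print(f"possible redundancy: {questions[i]!r} ~ {questions[j]!r}")
```

Because LaBSE is language-agnostic, the same comparison works across translated questionnaires, which is the point of using it over a monolingual encoder.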

Sunghoon Kang, Hyewon Park, Ricky Taira, Hyeoneui Kim

JMIR Med Inform 2025;13:e71687

Trajectory-Ordered Objectives for Self-Supervised Representation Learning of Temporal Healthcare Data Using Transformers: Model Development and Evaluation Study

Med-BERT [11] introduced a length-of-stay auxiliary prediction task to enrich contextual information about the severity of patients’ health conditions. CEHR-BERT [16] and Claim-PT [27] incorporated visit type predictions (eg, inpatient and outpatient visits) to inject external domain knowledge into the model, mitigating the effect of sparse codes based on the observation that different medical concepts are associated with different visit types.
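
A minimal sketch of what such auxiliary objectives look like on a transformer over coded visit sequences; the vocabulary size, dimensions, pooling, and heads are assumptions for illustration, not the cited models' exact designs:

```python
# Sketch: auxiliary pretraining heads over coded visit sequences, in the
# spirit of the length-of-stay and visit-type tasks described above.
# Vocabulary size, dimensions, and pooling are illustrative assumptions.
import torch
import torch.nn as nn

class EHRTransformer(nn.Module):
    def __init__(self, vocab_size=5000, d_model=128, n_visit_types=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.mlm_head = nn.Linear(d_model, vocab_size)       # masked-code task
        self.los_head = nn.Linear(d_model, 1)                # length-of-stay aux
        self.visit_head = nn.Linear(d_model, n_visit_types)  # visit-type aux

    def forward(self, codes):
        h = self.encoder(self.embed(codes))      # (B, T, d_model)
        pooled = h.mean(dim=1)                   # simple sequence summary
        return self.mlm_head(h), self.los_head(pooled), self.visit_head(pooled)

model = EHRTransformer()
codes = torch.randint(0, 5000, (2, 32))          # toy batch of code sequences
mlm_logits, los_pred, visit_logits = model(codes)
# Pretraining would sum the masked-code loss with weighted auxiliary losses.
```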

Ali Amirahmadi, Farzaneh Etminani, Jonas Björk, Olle Melander, Mattias Ohlsson

JMIR Med Inform 2025;13:e68138

Impact of a Virtual Reality Video ("A Walk-Through Dementia") on YouTube Users: Topic Modeling Analysis

API: application programming interface; BERT: Bidirectional Encoder Representations from Transformers. The analyzed video comments are from a series of 360° videos titled “A Walk-Through Dementia,” the most viewed videos under the search terms “dementia” and “Alzheimer disease.” Developed by Alzheimer’s Research UK, this series aims to raise awareness about the impact of dementia on individuals’ lives by engaging the public, health care professionals, and caregivers.
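
A minimal sketch of BERT-based topic modeling over video comments, assuming the bertopic package; the placeholder comments stand in for data the authors would have retrieved through the YouTube API:

```python
# Sketch: BERT-based topic modeling of video comments with BERTopic.
# Placeholder comments; the study's data came from the YouTube API.
from bertopic import BERTopic

comments = [
    "This helped me understand what my grandmother experiences.",
    "The supermarket scene was so disorienting. Powerful video.",
    "Sharing this with our caregiver support group.",
    "I never realized how confusing patterned floors could be.",
] * 25                       # padded out; real corpora have many comments

topic_model = BERTopic(min_topic_size=5)         # illustrative setting
topics, probs = topic_model.fit_transform(comments)
print(topic_model.get_topic_info())              # topics with top terms
```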

Xiaoli Li, Xiaoyu Liu, Cheng Yin, Sandra Collins, Eman Alanazi

JMIR Form Res 2025;9:e67755

Automated Radiology Report Labeling in Chest X-Ray Pathologies: Development and Evaluation of a Large Language Model Framework

Methods using BERT-based models, on the other hand, are constrained by the inherent limitations of BERT, such as its noncausal nature and limited context length. BERT-based models, despite their effectiveness in text classification tasks, have two key architectural limitations that constrain their performance in radiology report labeling. First, BERT’s bidirectional nature focuses on context aggregation but lacks the ability to model causal relationships in sequential data.
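
To make the contrast concrete, the toy sketch below shows the two attention patterns at issue: a BERT-style encoder uses a full (bidirectional) mask, while causal models restrict each token to its predecessors. Dimensions are illustrative:

```python
# Sketch: the two attention patterns at issue. A BERT-style encoder sees
# the full sequence; a causal model masks future tokens. Toy dimensions.
import torch

T = 5
bidirectional_mask = torch.ones(T, T)        # every token attends everywhere
causal_mask = torch.tril(torch.ones(T, T))   # token i attends only to j <= i
print(causal_mask)
# BERT uses the first pattern: strong context aggregation, but no
# left-to-right (causal) modeling of sequential dependencies.
```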

Abdullah Abdullah, Seong Tae Kim

JMIR Med Inform 2025;13:e68618

Identifying Patient-Reported Care Experiences in Free-Text Survey Comments: Topic Modeling Study

We found that researchers have taken different approaches to topic modeling of patient-reported experience, including latent Dirichlet allocation (LDA), nonnegative matrix factorization, Top2Vec, and BERT (bidirectional encoder representations from transformers). Many of these studies trained new models or otherwise involved what could be considered a high degree of model tuning.
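
A minimal sketch of the LDA baseline named in this excerpt, using scikit-learn; the four comments and the 2-topic setting are illustrative:

```python
# Sketch: the LDA baseline named in the excerpt, via scikit-learn.
# The four comments and the 2-topic setting are illustrative.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "nurses were kind and explained everything clearly",
    "long wait in the emergency department before being seen",
    "staff answered all of our questions about discharge",
    "waited hours for a bed after admission",
]
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```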

Brian Steele, Paul Fairie, Kyle Kemp, Adam G D'Souza, Matthias Wilms, Maria Jose Santana

JMIR Med Inform 2025;13:e63466

Teenager Substance Use on Reddit: Mixed Methods Computational Analysis of Frames and Emotions

Initially, the BERT (Bidirectional Encoder Representations from Transformers)–based topic modeling algorithm identified topic clusters by capturing semantic relationships and patterns within the text. These clusters were provisionally labeled based on the most representative terms and phrases extracted by the algorithm.
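
A minimal sketch of the provisional-labeling step, assuming clusters have already been produced by a BERT-based pipeline; the posts, the two clusters, and the plain term-frequency labeling rule are illustrative stand-ins for the representative terms such an algorithm would extract:

```python
# Sketch: provisionally labeling clusters by their most frequent terms.
# Clusters, posts, and the term-frequency rule are illustrative stand-ins
# for the representative terms a BERT-based pipeline would extract.
from collections import Counter
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

clusters = {
    0: ["tried vaping at a party", "friends pressured me into vaping"],
    1: ["anxious about quitting", "quitting made my anxiety worse"],
}
for cid, posts in clusters.items():
    words = [w for p in posts for w in p.lower().split()
             if w not in ENGLISH_STOP_WORDS]
    label = ", ".join(w for w, _ in Counter(words).most_common(3))
    print(f"cluster {cid} provisional label: {label}")
```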

Xinyu Zhang, Jianfeng Zhu, Deric R Kenne, Ruoming Jin

J Med Internet Res 2025;27:e59338

Autonomous International Classification of Diseases Coding Using Pretrained Language Models and Advanced Prompt Learning Techniques: Evaluation of an Automated Analysis System Using Medical Text

Coutinho and Martins [14] proposed a BERT model with a fine-tuning method for automatic ICD-10 coding of death certificates based on free-text descriptions and associated documents. Additionally, Yan et al [15] introduced RadBERT, an ensemble model combining BERT-base, Clinical-BERT, the robustly optimized BERT pretraining approach (RoBERTa), and BioMed-RoBERTa adapted for radiology.
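
A minimal sketch of fine-tuning a BERT classifier for ICD coding in the spirit of these approaches, assuming Hugging Face transformers; the 3-code label space and the note text are toy placeholders (real systems cover thousands of codes):

```python
# Sketch: fine-tuning BERT for multilabel ICD coding. The 3-code label
# space, the note text, and the single step are toy placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

codes = ["I10", "E11.9", "J18.9"]                # toy ICD-10 label space
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(codes),
    problem_type="multi_label_classification",   # BCE loss per code
)

batch = tok(["essential hypertension noted on exam"], return_tensors="pt")
labels = torch.tensor([[1.0, 0.0, 0.0]])         # only I10 applies
loss = model(**batch, labels=labels).loss
loss.backward()                                  # one illustrative update step
```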

Yan Zhuang, Junyan Zhang, Xiuxing Li, Chao Liu, Yue Yu, Wei Dong, Kunlun He

JMIR Med Inform 2025;13:e63020

Large Language Models for Mental Health Applications: Systematic Review

This criterion encompasses models such as GPT (OpenAI) and Bidirectional Encoder Representations from Transformers (BERT; Google AI). Although the standard BERT model, with only 0.34 billion parameters [29], does not meet the traditional criteria for “large,” its sophisticated bidirectional design and pivotal role in establishing new natural language processing (NLP) benchmarks justify its inclusion among notable LLMs [30].

Zhijun Guo, Alvina Lai, Johan H Thygesen, Joseph Farrington, Thomas Keen, Kezhi Li

JMIR Ment Health 2024;11:e57400

Fine-Tuned Bidirectional Encoder Representations From Transformers Versus ChatGPT for Text-Based Outpatient Department Recommendation: Comparative Study

To address this issue, we developed Korean Medical BERT (KM-BERT), a medical domain–specific pretrained BERT model, which was trained on a corpus of 6 million sentences from medical textbooks, health information news, and medical research papers [17]. Furthermore, we developed the fine-tuned KM-BERT model capable of recommending medical specialties based on general user queries [18]. Comparing these models can reveal which types of tasks each model is better suited to in the health care domain.
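
A minimal sketch of how such a fine-tuned recommender is queried, assuming Hugging Face transformers; the model path is a placeholder for a checkpoint fine-tuned as in [18], and the department labels are illustrative:

```python
# Sketch: querying a fine-tuned department recommender. The model path is
# a placeholder for a checkpoint fine-tuned as in [18]; labels illustrative.
from transformers import pipeline

recommender = pipeline(
    "text-classification",
    model="path/to/fine-tuned-km-bert",          # placeholder checkpoint
)
query = "I get short of breath and chest tightness when climbing stairs."
print(recommender(query))   # e.g., [{'label': 'cardiology', 'score': 0.93}]
```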

Eunbeen Jo, Hakje Yoo, Jong-Ho Kim, Young-Min Kim, Sanghoun Song, Hyung Joon Joo

JMIR Form Res 2024;8:e47814