Search Results (1-10 of 18)


Lung Cancer Diagnosis From Computed Tomography Images Using Deep Learning Algorithms With Random Pixel Swap Data Augmentation: Algorithm Development and Validation Study

We conducted comprehensive experiments to validate the effectiveness of the proposed RPS technique in enhancing DL model performance across both CNN and transformer architectures. For our evaluation, we selected 4 established models: ResNet-34 [52], MobileNetV3 (small variant) [53], Vision Transformer (base-16) [23], and Swin Transformer (tiny version) [29], all initialized with pretrained weights.
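
This excerpt does not define the RPS operation itself. A minimal sketch of one plausible reading, in which randomly chosen pixel pairs within an image are swapped, is shown below; the function name, swap count, and NumPy interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def random_pixel_swap(image, num_swaps=500, rng=None):
    """Illustrative augmentation: swap randomly chosen pixel pairs.

    A plausible sketch of a "random pixel swap" augmentation, not the
    exact procedure from the cited paper.
    """
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = out.shape[:2]
    # Draw two independent sets of (row, col) coordinates and swap their values.
    ys1, xs1 = rng.integers(0, h, num_swaps), rng.integers(0, w, num_swaps)
    ys2, xs2 = rng.integers(0, h, num_swaps), rng.integers(0, w, num_swaps)
    tmp = out[ys1, xs1].copy()
    out[ys1, xs1] = out[ys2, xs2]
    out[ys2, xs2] = tmp
    return out

# Example: augment a dummy 224x224 grayscale CT slice.
ct_slice = np.random.rand(224, 224).astype(np.float32)
augmented = random_pixel_swap(ct_slice)
```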

Ayomide Adeyemi Abe, Mpumelelo Nyathi

JMIR Bioinform Biotech 2025;6:e68848


Multicriteria Optimization of Language Models for Heart Failure With Preserved Ejection Fraction Symptom Detection in Spanish Electronic Health Records: Comparative Modeling Study

To what extent can encoder-based Transformer models support the early identification of HFpEF symptomatology indicative of cardiac amyloidosis in Spanish-language clinical narratives?

Jacinto Mata, Victoria Pachón, Ana Manovel, Manuel J Maña, Manuel de la Villa

J Med Internet Res 2025;27:e76433


Trajectory-Ordered Objectives for Self-Supervised Representation Learning of Temporal Healthcare Data Using Transformers: Model Development and Evaluation Study

In this study, we used a multihead attention transformer encoder, drawing inspiration from BERT [8,9]. The model architecture, illustrated in Figure 3, includes an embedding module, a multihead attention transformer encoder, a feed-forward layer, and 2 classifier heads. The embedding module integrates medical codes with their associated temporal information.
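
The described arrangement (a code-plus-time embedding module, a transformer encoder, a feed-forward layer, and 2 classifier heads) can be sketched in PyTorch as follows; the dimensions, vocabulary sizes, and head purposes are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """Illustrative BERT-style encoder over (medical code, time) sequences."""

    def __init__(self, n_codes=10000, n_time_bins=512, d_model=256,
                 n_heads=8, n_layers=4, n_classes=2):
        super().__init__()
        # Embedding module: sum code embeddings with temporal embeddings.
        self.code_emb = nn.Embedding(n_codes, d_model)
        self.time_emb = nn.Embedding(n_time_bins, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.ff = nn.Linear(d_model, d_model)
        # Two classifier heads, eg, a pretraining objective and a task head.
        self.head_a = nn.Linear(d_model, n_classes)
        self.head_b = nn.Linear(d_model, n_classes)

    def forward(self, codes, times):
        x = self.code_emb(codes) + self.time_emb(times)   # integrate code + time
        h = self.encoder(x)
        pooled = torch.relu(self.ff(h.mean(dim=1)))       # mean-pool the sequence
        return self.head_a(pooled), self.head_b(pooled)

model = TrajectoryEncoder()
codes = torch.randint(0, 10000, (2, 32))  # batch of 2 trajectories, 32 codes each
times = torch.randint(0, 512, (2, 32))    # discretized timestamps per code
logits_a, logits_b = model(codes, times)
```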

Ali Amirahmadi, Farzaneh Etminani, Jonas Björk, Olle Melander, Mattias Ohlsson

JMIR Med Inform 2025;13:e68138


Enhancing Antidiabetic Drug Selection Using Transformers: Machine-Learning Model Development

The field of ML has undergone significant advancement since the introduction of the transformer approach in 2017 [28,29]. Transformers enable contextual interactions for natural language processing tasks and have become a core technology across diverse domains [30-32]. The transformer model incorporates an attention mechanism and has shown remarkable performance in extracting temporal and semantic relationships, leading to success in tasks such as generation and classification [33].
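
The attention mechanism referenced here is the scaled dot-product attention introduced with the transformer; a minimal single-head NumPy sketch (no masking or multihead projection, shapes chosen for illustration):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: 4 tokens with embedding dimension 8.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)  # shape (4, 8)
```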

Hisashi Kurasawa, Kayo Waki, Tomohisa Seki, Eri Nakahara, Akinori Fujino, Nagisa Shiomi, Hiroshi Nakashima, Kazuhiko Ohe

JMIR Med Inform 2025;13:e67748


Experience of Cardiovascular and Cerebrovascular Disease Surgery Patients: Sentiment Analysis Using the Korean Bidirectional Encoder Representations from Transformers (KoBERT) Model

The tokenized input was then processed through the KoBERT transformer encoder, followed by a fully connected classification layer to predict the sentiment as positive, neutral, or negative. For training, we used the Adam optimizer with a learning rate of 5×10⁻⁵, a warmup ratio of 0.1, and gradient clipping with a max norm of 1 to stabilize learning. The model was trained for 3 epochs with a batch size of 64. To evaluate computational performance, the training and inference times were measured.
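
The reported hyperparameters map directly onto a standard PyTorch fine-tuning loop. A minimal sketch follows; the checkpoint ID and the placeholder batches are assumptions, not the authors' code.

```python
import torch
from torch.nn.utils import clip_grad_norm_
from transformers import (AutoModelForSequenceClassification,
                          get_linear_schedule_with_warmup)

# "skt/kobert-base-v1" is a published KoBERT checkpoint; the authors'
# exact checkpoint and tokenizer wiring may differ.
model = AutoModelForSequenceClassification.from_pretrained(
    "skt/kobert-base-v1", num_labels=3)  # positive / neutral / negative

EPOCHS, LR, WARMUP_RATIO, MAX_NORM = 3, 5e-5, 0.1, 1.0
# Placeholder standing in for a DataLoader of tokenized reviews
# (batch size 64, sequence length 32; random IDs for illustration).
batches = [{"input_ids": torch.randint(0, 8000, (64, 32)),
            "attention_mask": torch.ones(64, 32, dtype=torch.long),
            "labels": torch.randint(0, 3, (64,))}]

total_steps = EPOCHS * len(batches)
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(WARMUP_RATIO * total_steps),
    num_training_steps=total_steps)

model.train()
for _ in range(EPOCHS):
    for batch in batches:
        optimizer.zero_grad()
        loss = model(**batch).loss                      # cross-entropy loss
        loss.backward()
        clip_grad_norm_(model.parameters(), MAX_NORM)   # max norm of 1
        optimizer.step()
        scheduler.step()
```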

Hocheol Lee, Yu Seong Hwang, Ye Jun Kim, Yukyung Park, Heui Sug Jo

JMIR Med Inform 2025;13:e65127


Transformer-Based Language Models for Group Randomized Trial Classification in Biomedical Literature: Model Development and Validation

Pretrained transformer language models, like bidirectional encoder representations from transformers (BERT) [16-18], have outperformed earlier deep neural network models, including convolutional neural networks and recurrent neural networks. Examples of transformer-based models trained on biomedical data include BioBERT [19], BioLinkBERT [20], BlueBERT [21], and BioMedBERT [22], which are pretrained on biomedical literature and clinical text.
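
Such pretrained checkpoints are typically fine-tuned through a standard sequence-classification interface. A minimal sketch using the Hugging Face transformers library follows; the BioBERT hub ID is one published checkpoint, and the binary labeling is an illustrative assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "dmis-lab/biobert-v1.1" is a published BioBERT checkpoint; any of the
# biomedical BERT variants named above could be substituted.
tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-v1.1")
model = AutoModelForSequenceClassification.from_pretrained(
    "dmis-lab/biobert-v1.1", num_labels=2)  # eg, GRT vs non-GRT abstract

text = "A group randomized trial of a school-based smoking prevention program."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities (head untrained here)
```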

Elaheh Aghaarabi, David Murray

JMIR Med Inform 2025;13:e63267


Exploring Biases of Large Language Models in the Field of Mental Health: Comparative Questionnaire Study of the Effect of Gender and Sexual Orientation in Anorexia Nervosa and Bulimia Nervosa Case Vignettes

The technique of modeling words in a large context has been referred to as transformer-based large language modeling [8]. This may not only facilitate the automatic analysis of large amounts of text data [9,10] but also, by modeling words in a large context, allow the generation of meaningful text and the interactive use of this technology [5,10]. Thus, the application of LLMs may improve the efficiency and effectiveness of data processing in various fields, including health care [5].

Rebekka Schnepper, Noa Roemmel, Rainer Schaefert, Lena Lambrecht-Walzinger, Gunther Meinlschmidt

JMIR Ment Health 2025;12:e57986


Transformers for Neuroimage Segmentation: Scoping Review

In this review, we focused on the deep transformer–based techniques that have gained increasing attention recently. The proposed models fall into three categories: purely transformer-based techniques, CNNs combined with transformers, and generative adversarial networks combined with transformers. Among these, methods based on TransBTS, TransUNet, Swin-Unet, and U-Net with transformers are the most widely used models for neuroimage segmentation. Figure 3 illustrates the architectures of these models.

Maya Iratni, Amira Abdullah, Mariam Aldhaheri, Omar Elharrouss, Alaa Abd-alrazaq, Zahiriddin Rustamov, Nazar Zaki, Rafat Damseh

J Med Internet Res 2025;27:e57723


Multifaceted Natural Language Processing Task–Based Evaluation of Bidirectional Encoder Representations From Transformers Models for Bilingual (Korean and English) Clinical Notes: Algorithm Development and Validation

Zhang and Jankowski [25] proposed average pooling transformer layers handling token-, sentence-, and document-level embeddings for classifying International Classification of Diseases codes. Their model outperformed the BERT-base model by 11 points. For the reading comprehension task, BERT can be used to determine the answer span within a given text. Pampari et al [26] proposed the electronic medical record question answering (emrQA) dataset to determine the answer span to a question in a clinical context.
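
Answer-span extraction of this kind uses a question-answering head that predicts start and end token positions over the context. A minimal sketch with the Hugging Face pipeline API follows; the checkpoint and the clinical snippet are illustrative, not drawn from emrQA.

```python
from transformers import pipeline

# "distilbert-base-cased-distilled-squad" is a published extractive-QA
# checkpoint; a clinically fine-tuned model would be used for emrQA-style data.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

note = ("The patient was started on metformin 500 mg twice daily "
        "for newly diagnosed type 2 diabetes.")
result = qa(question="What medication was the patient started on?", context=note)
print(result["answer"], result["score"])  # predicted span and its confidence
```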

Kyungmo Kim, Seongkeun Park, Jeongwon Min, Sumin Park, Ju Yeon Kim, Jinsu Eun, Kyuha Jung, Yoobin Elyson Park, Esther Kim, Eun Young Lee, Joonhwan Lee, Jinwook Choi

JMIR Med Inform 2024;12:e52897


Enhancing Type 2 Diabetes Treatment Decisions With Interpretable Machine Learning Models for Predicting Hemoglobin A1c Changes: Machine Learning Model Development

Since its introduction in 2017, the transformer model has excelled in various time-series predictive tasks, solidifying its position as a core technology across multiple fields [27-32]. The transformer model incorporates an attention mechanism that simplifies the extraction of temporal relationships, setting it apart from other models [33-35].
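
Because attention itself is order agnostic, temporal structure is usually injected through positional encodings. A minimal sketch of the sinusoidal scheme from the original transformer paper follows; the sequence length and dimensions are illustrative.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(...)."""
    positions = np.arange(seq_len)[:, None]                      # (seq_len, 1)
    div = np.power(10000.0, np.arange(0, d_model, 2) / d_model)  # (d_model/2,)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(positions / div)
    pe[:, 1::2] = np.cos(positions / div)
    return pe

# Example: encode 12 monthly HbA1c measurements into a 64-dim model space.
pe = sinusoidal_positional_encoding(seq_len=12, d_model=64)  # shape (12, 64)
```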

Hisashi Kurasawa, Kayo Waki, Tomohisa Seki, Akihiro Chiba, Akinori Fujino, Katsuyoshi Hayashi, Eri Nakahara, Tsuneyuki Haga, Takashi Noguchi, Kazuhiko Ohe

JMIR AI 2024;3:e56700