Search Results (1 to 4 of 4 Results)
- 2 JMIR Formative Research
- 1 JMIR Medical Informatics
- 1 Journal of Medical Internet Research

Encoder-only models, such as the BERT (Bidirectional Encoder Representations from Transformers) architecture and its variant RoBERTa (Robustly Optimized BERT Pretraining Approach) [15], excel at capturing relevant information from the input and constructing meaningful representations [16-18], and they underpin current state-of-the-art (SOTA) models for ICD coding [5,7]. Conversely, decoder models, exemplified by GPT, specialize in generating coherent and contextually relevant text [19].
JMIR Form Res 2025;9:e60095
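The encoder/decoder distinction described above comes down to the self-attention mask: encoder-only models such as BERT and RoBERTa attend bidirectionally over the whole input, while decoder-only models such as GPT attend causally, each token seeing only its predecessors. A minimal pure-Python sketch of the two mask shapes (illustrative only, not from any of the cited papers):

```python
def encoder_attention_mask(seq_len):
    """Bidirectional mask: every token may attend to every other token.
    This is what lets encoder-only models (BERT, RoBERTa) build each
    token's representation from both left and right context."""
    return [[True] * seq_len for _ in range(seq_len)]

def decoder_attention_mask(seq_len):
    """Causal mask: token i may attend only to positions j <= i.
    Decoder-only models (GPT) use this so text can be generated
    left to right, one token at a time."""
    return [[j <= i for j in range(seq_len)] for i in range(seq_len)]

# With 4 tokens, the encoder mask is all-True, while the decoder
# mask is lower-triangular.
enc = encoder_attention_mask(4)
dec = decoder_attention_mask(4)
```

In practice these masks are applied inside the attention score computation (disallowed positions get negative infinity before the softmax); the lists above only show which positions are visible.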

To demonstrate the effectiveness of the algorithm, we constructed four models: (1) SSSS + BERT + CRF, (2) SSSS + BERT + BiLSTM + CRF, (3) SSSS + RoBERTa + CRF, and (4) SSSS + RoBERTa + BiLSTM + CRF. These were compared with BERT + CRF (baseline 1) and BERT + BiLSTM + CRF (baseline 2). To investigate the impact of SSSS on RoBERTa, we also performed an ablation study on the RoBERTa + CRF and RoBERTa + BiLSTM + CRF models. The results for CCKS-2017 and CCKS-2019 are presented in Tables 7 and 8.
JMIR Med Inform 2024;12:e60334
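The CRF layer shared by all the models above chooses the best tag sequence jointly rather than tagging each token independently, using Viterbi decoding over per-token emission scores and tag-to-tag transition scores. A minimal pure-Python sketch with made-up tags and scores (the actual models produce emission scores from BERT/RoBERTa, optionally passed through a BiLSTM):

```python
def viterbi_decode(emissions, transitions):
    """Return the highest-scoring tag sequence under a linear-chain CRF.
    emissions: list of per-token score dicts {tag: score}
    transitions: dict {(prev_tag, curr_tag): score}
    All tag names and scores here are illustrative."""
    tags = list(emissions[0])
    # score[t] = best score of any path ending in tag t at the current step
    score = {t: emissions[0][t] for t in tags}
    backpointers = []
    for emit in emissions[1:]:
        new_score, ptr = {}, {}
        for cur in tags:
            prev = max(tags, key=lambda p: score[p] + transitions[(p, cur)])
            new_score[cur] = score[prev] + transitions[(prev, cur)] + emit[cur]
            ptr[cur] = prev
        score = new_score
        backpointers.append(ptr)
    # Backtrack from the best final tag
    best = max(tags, key=score.get)
    path = [best]
    for ptr in reversed(backpointers):
        path.append(ptr[path[-1]])
    return path[::-1]
```

The transition scores are what distinguish a CRF from per-token argmax: an implausible tag-to-tag move (e.g. an "inside" tag with no preceding "begin" tag) can be penalized even when the local emission score favors it.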

As part of our final sample of 345,053 reviews, the 2000 hand-coded reviews from our training data set were recoded by RoBERTa. The interrater reliability between our hand-coding and RoBERTa was Cohen κ=0.96 for both interpersonal manner and technical competence. Comparing the RoBERTa codes with the original hand codes for these reviews, we found only 107 (5.4%) reviews had coding discrepancies.
J Med Internet Res 2024;26:e50236
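Cohen κ, the interrater reliability statistic reported above, corrects raw percent agreement for the agreement two raters would reach by chance given their label frequencies. A small illustrative implementation (the labels in the usage example are invented, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same label
    # independently, given each rater's marginal label frequencies.
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two raters agree on 3 of 4 items.
kappa = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])
```

A κ of 0.96, as reported, indicates near-perfect agreement after chance correction; κ=0 would mean agreement no better than chance.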

For this study, we used different transformer models, including BERT [13], RoBERTa [37], XLM-RoBERTa [37], and DistilBERT [38].
To measure the performance of each classifier, we used weighted average precision (P), recall (R), and F1-score. We used weighted metrics because they account for the imbalanced class distribution.
JMIR Form Res 2022;6(5):e36238
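Weighted averaging, as used above, computes each metric per class and then averages with weights proportional to class support, so majority and minority classes contribute in proportion to their frequency. A minimal pure-Python sketch mirroring scikit-learn's `average='weighted'` behavior (labels are illustrative):

```python
from collections import Counter

def weighted_prf1(y_true, y_pred):
    """Support-weighted average precision, recall, and F1.
    Each class's metric is weighted by its share of the true labels."""
    support = Counter(y_true)
    n = len(y_true)
    p_avg = r_avg = f_avg = 0.0
    for cls, count in support.items():
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        pred_pos = sum(p == cls for p in y_pred)
        prec = tp / pred_pos if pred_pos else 0.0
        rec = tp / count
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        weight = count / n
        p_avg += weight * prec
        r_avg += weight * rec
        f_avg += weight * f1
    return p_avg, r_avg, f_avg
```

Note that, unlike macro averaging, the weighted F1 is not a simple harmonic mean of the weighted precision and recall; each class's F1 is computed first and then weighted.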