Pretrained Transformer Language Models Versus Pretrained Word Embeddings for the Detection of Accurate Health Information on Arabic Social Media: Comparative Study


Taking a stricter approach, Albalawi et al [12] and Sharma et al [13] excluded tweets on which experts disagreed, in an attempt to remove uncertainty from their data sets. Table 1 summarizes these studies. Unsurprisingly, studies that relied on expert opinion used relatively small data sets (ranging from 109 to 625 tweets) compared with studies that used other labeling methods (Table 1).
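The filtering step used by these stricter studies (keeping only tweets whose expert labels are unanimous) can be sketched as follows; the function name, data layout, and label strings are illustrative assumptions, not the papers' actual code:

```python
def keep_unanimous(labeled_tweets):
    """Keep only tweets on which all expert annotators agree.

    labeled_tweets: list of (tweet_text, [label1, label2, ...]) pairs,
    one label per expert. Returns (tweet_text, agreed_label) pairs,
    dropping any tweet with conflicting labels.
    """
    kept = []
    for text, labels in labeled_tweets:
        if len(set(labels)) == 1:  # all experts assigned the same label
            kept.append((text, labels[0]))
    return kept

# Hypothetical example: three tweets, each labeled by two experts
data = [
    ("tweet A", ["accurate", "accurate"]),
    ("tweet B", ["accurate", "inaccurate"]),   # disagreement -> excluded
    ("tweet C", ["inaccurate", "inaccurate"]),
]
print(keep_unanimous(data))
# -> [('tweet A', 'accurate'), ('tweet C', 'inaccurate')]
```

Dropping disagreements trades data-set size for label certainty, which is consistent with the small corpora (109 to 625 tweets) these studies report.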

Yahya Albalawi, Nikola S Nikolov, Jim Buckley

JMIR Form Res 2022;6(6):e34834