Search Results (1 to 4 of 4 Results)


Evaluating Bard Gemini Pro and GPT-4 Vision Against Student Performance in Medical Visual Question Answering: Comparative Case Study

GPT-4 1106 Vision Preview had a correct answer rate of 67.8% (609/898) for the answered questions, while Bard Gemini Pro had a correct answer rate of 46.5% (477/1026; χ²₁=87.7). The students' collective majority vote answered 94.5% (1011/1070) of questions correctly, surpassing all models as well as the mean score of passing students (GPT-4 1106 Vision Preview vs student majority vote: χ²₁=408.5). Figure: Comparative accuracy of AI models and medical students in answering image-based multiple-choice questions.

Jonas Roos, Ron Martin, Robert Kaczmarczyk

JMIR Form Res 2024;8:e57592

Comparing Public Perceptions and Preventive Behaviors During the Early Phase of the COVID-19 Pandemic in Hong Kong and the United Kingdom: Cross-sectional Survey Study

To elucidate these relationships, a more thorough comparative analysis is required. However, studies in different countries often use different metrics to measure the same behavior, which makes findings difficult to interpret across heterogeneous contexts. In this study, we examined and compared public perception and adoption of preventive behaviors during the early phase of the COVID-19 pandemic in two different settings: Hong Kong and the United Kingdom.

Leigh Bowman, Kin On Kwok, Rozlyn Redd, Yuanyuan Yi, Helen Ward, Wan In Wei, Christina Atchison, Samuel Yeung-Shan Wong

J Med Internet Res 2021;23(3):e23231