%0 Journal Article %@ 2561-326X %I JMIR Publications %V 8 %N %P e56788 %T Data Verification and Respondent Validity for a Web-Based Sexual Health Survey: Tutorial %A Parker,Jayelin N %A Rager,Theresa L %A Burns,Jade %A Mmeje,Okeoma %K sexually transmitted infections %K adolescent and young adults %K sexual health %K recruitment %K survey design %K social media %K data verification %K web-based surveys %K data integrity %K social media advertisements %K online advertisements %K STI %K STD %K sexual health survey %K sexually transmitted disease %D 2024 %7 9.12.2024 %9 Tutorial %J JMIR Form Res %G English %X Background: As technology continues to shape the landscape of health research, the use of web-based surveys for collecting sexual health information among adolescents and young adults has become increasingly prevalent. However, this shift toward digital platforms brings a new set of challenges, particularly the infiltration of automated bots that can compromise data integrity and the reliability of survey results. Objective: We aimed to outline the data verification process used in our study design, which employed survey programming and data cleaning protocols. Methods: A 26-item survey was developed and programmed with several data integrity functions, including reCAPTCHA scores, RelevantID fraud and duplicate scores, verification of IP addresses, and honeypot questions. Participants aged 15‐24 years were recruited via social media advertisements over 7 weeks and received a US $15 incentive after survey completion. Data verification occurred through a 2-part cleaning process, which removed responses that were incomplete, flagged as spam by Qualtrics, from duplicate IP addresses, or failing to meet the inclusion criteria. Final comparisons of reported age with date of birth and reported state with state inclusion criteria were performed. Participants who completed the study survey were linked to a second survey to receive their incentive. 
Responses without first and last names and full addresses were removed, as were those with duplicate IP addresses or the exact same longitude and latitude coordinates. Finally, IP addresses used to complete both surveys were compared, and consistent responses were eligible for an incentive. Results: Over 7 weeks, online advertisements for a web-based survey reached 1.4 million social media users. Of the 20,585 survey responses received, 4589 (22.3%) were verified. Incentives were sent to 462 participants; of these, 14 responses were duplicates and 3 contained discrepancies, resulting in a final sample of 445 responses. Conclusions: Confidential web-based surveys are an appealing method for reaching populations—particularly adolescents and young adults, who may be reluctant to disclose sensitive information to family, friends, or clinical providers. Web-based surveys are a useful tool for researchers targeting hard-to-reach populations due to the difficulty in obtaining a representative sample. However, researchers face the ongoing threat of bots and fraudulent participants in a technology-driven world, necessitating the adoption of evolving bot detection software and tailored protocols for data collection in unique contexts. 
%R 10.2196/56788 %U https://formative.jmir.org/2024/1/e56788 %U https://doi.org/10.2196/56788 %0 Journal Article %@ 2561-326X %I JMIR Publications %V 8 %N %P e62963 %T Describing the Framework for AI Tool Assessment in Mental Health and Applying It to a Generative AI Obsessive-Compulsive Disorder Platform: Tutorial %A Golden,Ashleigh %A Aboujaoude,Elias %+ Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, 401 Quarry Rd, Stanford, CA, 94304, United States, 1 650 498 9111, eaboujaoude@stanford.edu %K artificial intelligence %K ChatGPT %K generative artificial intelligence %K generative AI %K large language model %K chatbots %K machine learning %K digital health %K telemedicine %K psychotherapy %K obsessive-compulsive disorder %D 2024 %7 18.10.2024 %9 Tutorial %J JMIR Form Res %G English %X As artificial intelligence (AI) technologies occupy a bigger role in psychiatric and psychological care and become the object of increased research attention, industry investment, and public scrutiny, tools for evaluating their clinical, ethical, and user-centricity standards have become essential. In this paper, we first review the history of rating systems used to evaluate AI mental health interventions. We then describe the recently introduced Framework for AI Tool Assessment in Mental Health (FAITA-Mental Health), whose scoring system allows users to grade AI mental health platforms on key domains, including credibility, user experience, crisis management, user agency, health equity, and transparency. Finally, we demonstrate the use of the FAITA-Mental Health scale by systematically applying it to OCD Coach, a generative AI tool readily available on the ChatGPT store and designed to help manage the symptoms of obsessive-compulsive disorder. 
The results offer insights into the utility and limitations of FAITA-Mental Health when applied to “real-world” generative AI platforms in the mental health space, suggesting that the framework effectively identifies key strengths and gaps in AI-driven mental health tools, particularly in areas such as credibility, user experience, and acute crisis management. The results also highlight the need for stringent standards to guide AI integration into mental health care in a manner that is not only effective but also safe and protective of the users’ rights and welfare. %M 39423001 %R 10.2196/62963 %U https://formative.jmir.org/2024/1/e62963 %U https://doi.org/10.2196/62963 %U http://www.ncbi.nlm.nih.gov/pubmed/39423001 %0 Journal Article %@ 2561-326X %I JMIR Publications %V 8 %N %P e58627 %T Best Practices for Designing and Testing Behavioral and Health Communication Interventions for Delivery in Private Facebook Groups: Tutorial %A Pagoto,Sherry %A Lueders,Natalie %A Palmer,Lindsay %A Idiong,Christie %A Bannor,Richard %A Xu,Ran %A Ingels,Spencer %+ Department of Allied Health Sciences, University of Connecticut, Storrs, CT, United States, 1 860 486 5917, sherry.pagoto@uconn.edu %K social media %K Facebook %K behavioral intervention %K health communication %K Facebook groups %D 2024 %7 4.9.2024 %9 Tutorial %J JMIR Form Res %G English %X Facebook, the most popular social media platform in the United States, is used by 239 million US adults, which represents 71% of the population. Not only do most US adults use Facebook but they also spend an average of 40 minutes per day on the platform. Due to Facebook’s reach and ease of use, it is increasingly being used as a modality for delivering behavioral and health communication interventions. Typically, a Facebook-delivered intervention involves creating a private group to deliver intervention content for participants to engage with asynchronously. 
In many interventions, a counselor is present to facilitate discussions and provide feedback and support. Studies of Facebook-delivered interventions have been conducted on a variety of topics, and they vary widely in terms of the intervention content used in the group, use of human counselors, group size, engagement, and other characteristics. In addition, results vary widely and may depend on how well the intervention was executed and the degree to which it elicited engagement among participants. Best practices for designing and delivering behavioral intervention content for asynchronous delivery in Facebook groups are lacking, as are best practices for engaging participants via this modality. In this tutorial, we propose best practices for using private Facebook groups to deliver, and test the efficacy of, behavioral or health communication interventions, including converting traditional intervention content into Facebook posts; creating protocols for onboarding, counseling, engagement, and data management; designing and branding intervention content; and using engagement data to optimize engagement and outcomes. %M 39231426 %R 10.2196/58627 %U https://formative.jmir.org/2024/1/e58627 %U https://doi.org/10.2196/58627 %U http://www.ncbi.nlm.nih.gov/pubmed/39231426 %0 Journal Article %@ 2561-326X %I JMIR Publications %V 8 %N %P e54407 %T A Simple and Systematic Approach to Qualitative Data Extraction From Social Media for Novice Health Care Researchers: Tutorial %A Pretorius,Kelly %+ School of Health Sciences, St. 
Edward's University, 3001 South Congress Avenue, Austin, TX, 78704, United States, 1 (512) 448 8500, kpretori@stedwards.edu %K social media analysis %K data extraction %K health care research %K extraction tutorial %K Facebook extraction %K Facebook analysis %K safe sleep %K sudden unexpected infant death %K social media %K analysis %K systematic approach %K qualitative data %K Facebook %K health-related %K maternal perspective %K maternal perspectives %K sudden infant death syndrome %K mother %K mothers %K women %K United States %K SIDS %K SUID %K post %K posts %D 2024 %7 9.7.2024 %9 Tutorial %J JMIR Form Res %G English %X Social media analyses have become increasingly popular among health care researchers. Social media continues to grow its user base and, when analyzed, offers unique insight into health problems. The process of obtaining data for social media analyses varies greatly and involves ethical considerations. Data extraction is often facilitated by software tools, some of which are open source, while others are costly and therefore not accessible to all researchers. The use of software for data extraction is accompanied by additional challenges related to the uniqueness of social media data. Thus, this paper serves as a tutorial for a simple method of extracting social media data that is accessible to novice health care researchers and public health professionals who are interested in pursuing social media research. The discussed methods were used to extract data from Facebook for a study of maternal perspectives on sudden unexpected infant death. 
%M 38980712 %R 10.2196/54407 %U https://formative.jmir.org/2024/1/e54407 %U https://doi.org/10.2196/54407 %U http://www.ncbi.nlm.nih.gov/pubmed/38980712 %0 Journal Article %@ 2561-326X %I JMIR Publications %V 8 %N %P e51013 %T Using a Semiautomated Procedure (CleanADHdata.R Script) to Clean Electronic Adherence Monitoring Data: Tutorial %A Bandiera,Carole %A Pasquier,Jérôme %A Locatelli,Isabella %A Schneider,Marie P %+ School of Pharmaceutical Sciences, University of Geneva, Rue Michel-Servet 1, Geneva, 1205, Switzerland, 41 223795316, marie.schneider@unige.ch %K medication adherence %K digital technology %K digital pharmacy %K electronic adherence monitoring %K data management %K data cleaning %K research methodology %K algorithms %K R %K semiautomated %K code %K coding %K computer science %K computer programming %K medications %K computer script %D 2024 %7 22.5.2024 %9 Tutorial %J JMIR Form Res %G English %X Background: Patient adherence to medications can be assessed using interactive digital health technologies such as electronic monitors (EMs). Changes in treatment regimens and deviations from EM use over time must be characterized to establish the actual level of medication adherence. Objective: We developed the computer script CleanADHdata.R to clean raw EM adherence data, and this tutorial is a guide for users. Methods: In addition to raw EM data, we collected adherence start and stop monitoring dates and identified the prescribed regimens, the expected number of EM openings per day based on the prescribed regimen, EM use deviations, and patients’ demographic data. The script formats the data longitudinally and calculates each day’s medication implementation. Results: We provided a simulated data set for 10 patients, for which 15 EMs were used over a median period of 187 (IQR 135-342) days. The median patient implementation before and after EM raw data cleaning was 83.3% (IQR 71.5%-93.9%) and 97.3% (IQR 95.8%-97.6%), respectively (Δ+14%). 
This difference is substantial enough to show that cleaning EM data can prevent data misinterpretation and provide a cleaned data set for analyzing adherence in terms of implementation and persistence. Conclusions: The CleanADHdata.R script is a semiautomated procedure that increases standardization and reproducibility. This script has broader applicability within the realm of digital health, as it can be used to clean adherence data collected with diverse digital technologies. %M 38776539 %R 10.2196/51013 %U https://formative.jmir.org/2024/1/e51013 %U https://doi.org/10.2196/51013 %U http://www.ncbi.nlm.nih.gov/pubmed/38776539 %0 Journal Article %@ 2561-326X %I JMIR Publications %V 7 %N %P e51202 %T Assessing Facilitator Fidelity to Principles of Public Deliberation: Tutorial %A Draucker,Claire %A Carrión,Andrés %A Ott,Mary A %A Knopf,Amelia %+ School of Nursing, Indiana University, 600 Barnhill Drive, Indianapolis, IN, 46201, United States, 1 317 274 2285, asknopf@iu.edu %K public deliberation %K deliberative democracy %K bioethics %K engagement %K theory %K process %K ethical conflict %K ethical %K ethics %K coding %K evaluation %K tutorial %K biomedical %K HIV %K HIV prevention %K HIV research %D 2023 %7 13.12.2023 %9 Tutorial %J JMIR Form Res %G English %X Public deliberation, or deliberative democracy, is a method used to elicit informed perspectives and justifiable solutions to ethically fraught or contentious issues that affect multiple stakeholder groups with conflicting interests. Deliberative events bring together stakeholders (deliberants) who are provided with empirical evidence on the central issue or concern and then asked to discuss the evidence, consider the issue from a societal perspective, and collectively work toward a justifiable resolution. There is increasing interest in this method, which warrants clear guidance for evaluating the quality of its use in research. 
Most of the existing literature on measuring deliberation quality emphasizes the quality of deliberants’ inputs (eg, engagement and evidence of compromise) during deliberative sessions. Fewer researchers have framed quality in terms of facilitator inputs, and these researchers tend to examine inputs that are consistent with generic group processes. The theory, process, and purpose of public deliberation, however, are distinct from those of focus groups or other group-based discussions and warrant a mechanism for measuring quality in terms of facilitator fidelity to the principles and processes of deliberative democracy. In our public deliberation on ethical conflicts in minor consent for biomedical HIV prevention research, we assessed facilitator fidelity to these principles and processes because we believe that such assessments serve as a component of a comprehensive evaluation of overall deliberation quality. We examined verbatim facilitator remarks in the deliberation transcripts and determined whether they aligned with the 6 principles of public deliberation: equal participation, respect for the opinions of others, adoption of a societal perspective, reasoned justification of ideas, expression of diverse opinions, and compromise or movement toward consensus. In this tutorial, we describe the development of a blueprint to guide researchers in assessing facilitator fidelity, share 3 templates that will assist them in the task, and describe the results of our assessment of facilitator fidelity in 1 of the 4 sites in which we conducted deliberations. 
%M 38090788 %R 10.2196/51202 %U https://formative.jmir.org/2023/1/e51202 %U https://doi.org/10.2196/51202 %U http://www.ncbi.nlm.nih.gov/pubmed/38090788 %0 Journal Article %@ 2561-326X %I JMIR Publications %V 7 %N %P e37269 %T An Emerging Screening Method for Interrogating Human Brain Function: Tutorial %A Sudre,Gustavo %A Bagić,Anto I %A Becker,James T %A Ford,John P %+ Brain FIT Imaging, LLC, 141 Main Street, Unadilla, NY, 13849, United States, 1 607 435 0930, gsudre@brainfitimaging.com %K screening %K brain function %K cognition %K magnetoencephalography %K MEG %K neuroimaging %K tutorial %K tool %K cognitive test %K signal %K cognitive function %D 2023 %7 27.4.2023 %9 Tutorial %J JMIR Form Res %G English %X Cognitive decline can be observed due to a myriad of causes. Clinicians would benefit from a noninvasive quantitative tool to screen and monitor brain function based on direct measures of neural features. In this study, we used neuroimaging data from magnetoencephalography (with a whole-head Elekta Neuromag 306 sensor system) to derive a set of features that strongly correlate with brain function. We propose that simple signal characteristics related to peak variability, timing, and abundance can be used by clinicians as a screening tool to investigate cognitive function in at-risk individuals. Using a minimalistic set of features, we were able to perfectly distinguish between participants with normative and nonnormative brain function, and we were also able to successfully predict participants’ Mini-Mental Test score (r=0.99; P<.001; mean absolute error=0.413). This set of features can be easily visualized in an analog fashion, providing clinicians with several graded measurements (in comparison to a single binary diagnostic tool) that can be used for screening and monitoring cognitive decline. 
%M 37103988 %R 10.2196/37269 %U https://formative.jmir.org/2023/1/e37269 %U https://doi.org/10.2196/37269 %U http://www.ncbi.nlm.nih.gov/pubmed/37103988