This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.
Misleading health claims are widespread in the media, and making choices based on such claims can negatively affect health. Thus, developing effective learning resources to enable people to think critically about health claims is of great value. Serious games can become an effective learning resource in this respect, as they can affect motivation and learning.
This study aims to document how user insights and input can inform the concept and development of a serious game application in critical thinking about health claims in addition to gathering user experiences with the game application.
This was a mixed methods study in 4 successive phases with both qualitative and quantitative data collected in the period from 2020-2022. Qualitative data on design and development were obtained from 4 unrecorded discussions, and qualitative evaluation data were obtained from 1 recorded focus group interview and 3 open-ended questions in the game application. The quantitative data originate from user statistics. The qualitative data were analyzed thematically, and user data were analyzed using nonparametric tests.
The first unrecorded discussion revealed that the students’ (3 participants’) assessment of whether a claim was reliable or not was limited to performing Google searches when faced with an ad for a health intervention. On the basis of the acquired knowledge of the target group, the game’s prerequisites, and the technical possibilities, a pilot of the game was created and reviewed question by question in 3 unrecorded discussions (6 participants). After adjustments, the game was advertised at the Oslo Metropolitan University, and 193 students tested the game. A correlation (
We found that user insights and inputs can be successfully used in the concept and development of a serious game that aims to engage students to think critically about health claims. The mixed methods evaluation revealed that the users experienced the game as educational and fun. Future research may focus on assessing the effect of the serious game on learning outcomes and health choices in randomized trials.
Unreliable claims about health and treatment effects are common in the media [
Because of the potential negative consequences of acting on unreliable claims, there is great value in developing effective learning resources to enable people to think critically about health claims and make informed choices [
In previous work, we have described how unreliable health claims in the media such as news stories, social media posts, and advertisements, can be used to teach students how to critically assess health claims [
The prerequisite for the game was to address the same key concepts for assessing health claims that were addressed in “Behind the headlines” [
There are several reviews of game-based learning activities that encourage learning by building on engagement and challenges to achieve the intended learning objectives [
This was a mixed methods study using both quantitative and qualitative data in 4 phases (
An overview of the 4 phases of the development and evaluation of the game application.
The following stages are addressed:
Assessment of experiences with health news in unrecorded discussion (concept phase).
User testing of the topic, design, and content for the game application in unrecorded discussions (development phase).
Assessment of user patterns of the game application using user data (implementation phase).
Assessment of the satisfaction with the game application using a questionnaire in the game application and a recorded focus group interview (FGI; evaluation phase).
In January 2021, students’ experiences with health news in the media were used as the basis for the development of the game. After the game was developed between January 2021 and October 2021, students’ and the reference group’s experiences with a pilot of the game were used to finalize the game. The game was launched as a contest where students could win prizes from November 26 to December 14, 2021, and quantitative data from the implemented game were used to assess the uptake and use patterns of the game. Finally, students’ experiences with the implemented game were evaluated using both data from open-ended questions in the game and a recorded FGI conducted in April 2022.
Data in phases 1 and 2 were collected without audio recording and did not contain personal or indirect personal data. According to the guidelines of the Norwegian Centre for Research Data, approval from them was therefore not necessary. We applied for approval for the audio recording of the FGI in phase 4 from the Norwegian Centre for Research Data (reference number 910105). For all phases, we followed the research ethics guidelines of Oslo Metropolitan University (OsloMet) [
A reference group consisting of academics from OsloMet, students from various study programs at OsloMet, and representatives from the game developer (sikresiden.no) helped plan and develop the game application. However, the representatives from the game developer were not involved in the data analyses.
Eligible participants for phase 1 were a pool of volunteer students from various Norwegian institutions of higher education who were interested in contributing to game development. Invitations to participate in an unrecorded discussion were sent to everyone in the pool between December 21, 2020, and January 25, 2021. The participants were offered a gift card of NOK 500 (US $46.9) to participate in the discussion.
In phase 2, eligible participants were a pool of volunteer students at OsloMet who were interested in participating in the research. A request for participants for unrecorded discussions was sent by MM on April 19, 2021 (
Eligible participants for phases 3 and 4 were Norwegian-speaking students at OsloMet. In the implementation phase (phase 3), in autumn 2021, 2 research assistants were involved in marketing activities at OsloMet, for example, producing short video clips to recruit students to try the game through the university’s social media channels (Instagram, Snapchat, and Facebook). Everyone who completed the game was entered into a lucky draw for 4 gift cards of NOK 500 (US $46.9) each, and the player with the highest score won a grand prize (wireless headphones worth NOK 5000, US $469.4). Recruitment for the recorded FGI in the evaluation phase (phase 4) took place in February and March 2022. Information about the FGI was posted on notice boards around the OsloMet campus in Oslo, and flyers were distributed to students at several campus cafeterias on 3 separate occasions. The students were offered a gift card of NOK 500 (US $46.9) to participate in the FGI.
Qualitative data in phase 1 were collected through an unrecorded discussion via Zoom (Zoom Video Communications) that lasted for 1.5 hours and followed a semistructured interview guide (
In phase 2, qualitative data were collected through three 1.5-hour unrecorded discussions via Zoom, where the game was demonstrated question by question (see
Examples of game elements (in Norwegian) assessed in phase 2.
Quantitative user data were collected during the implementation phase (phase 3). The data set, which constituted the quantitative data for this phase, was provided by the app developer at the researchers’ request after the competition ended in December 2021.
The game also contained a short evaluation questionnaire that appeared after the game was finished. Answering these questions was voluntary, and the questions appeared only after the first game round. The questionnaire was anonymous and administered by the app developer. The anonymous reports were provided at the researchers’ request and constituted the qualitative data for the evaluation phase (phase 4). The questionnaire consisted of 3 questions: (1)
To collect qualitative data from the recorded FGI in phase 4, Rosenbaum’s adaptation of Morville’s honeycomb framework for user-experience design was used to develop the interview guide [
Nonparametric tests were used to analyze the user data in phase 3 because the data were not normally distributed. Descriptive data are presented as median (range). Spearman rho was used to test the correlation between the number of replays and the total points scored in the game, and the independent-samples Kruskal-Wallis test was used to test for differences in points scored between faculty affiliations.
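The rank-based logic behind the Spearman correlation used here can be illustrated with a small sketch. The data below are hypothetical, not the study’s data; rho is computed by hand by ranking both variables (ties share the mean rank) and taking the Pearson correlation of the ranks:

```python
# Illustrative sketch (hypothetical data, not the study's data set):
# Spearman's rho = Pearson correlation of the rank-transformed variables.
def ranks(values):
    """Average 1-based ranks, with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_rho(x, y):
    return pearson(ranks(x), ranks(y))

# Hypothetical per-player data: replays vs. total points scored.
replays = [1, 1, 2, 3, 5, 8, 12, 20]
points = [3100, 3300, 3600, 4000, 4400, 4900, 5100, 5500]
print(round(spearman_rho(replays, points), 3))
```

A rho near 1 indicates that more replays go together with higher total points, which is the kind of monotone association the study tested for.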
Analysis of the unrecorded discussions in phases 1 and 2 and analysis of the open-ended questions 1 and 3 in the game survey in phase 4 was guided by a thematic analysis [
For the phase 4 FGI, the findings in each facet of the honeycomb framework were summarized in tabular form and supplemented with quotes. IKOE drafted the table, whereas MM and LGH double-checked the findings against quotes. In addition, the FGI transcript was analyzed thematically [
Three students (2 women and 1 man) from different fields of study and study sites in Norway provided data on their experiences with health news. The discussion revealed that the students use a variety of media sources when reading health news: traditional newspapers (“I steal the paper when I visit my parents”), web-based newspapers (“I often read the news on my phone”), and social media (“It is much click-baits there”). When asked how they assessed whether health news is reliable or not, the students said that they had higher trust in news from the government than in the mainstream media or newspapers. They also said they had less trust in health news from commercial media (“I have less trust in paid news”).
In the discussion, the participants were shown some paid health “news” shared by influencers on Instagram and so on, and asked how they would assess their trustworthiness. They said that they were skeptical of, for example, websites that appeared to have little content that was shared by influencers or otherwise were commercial. However, it appeared that the students did not have any strategy other than “to google it” to assess whether the health news was reliable or not. One participant said, “I would not have checked the references or sources for this health news.” Two of the informants also said that they had been fooled to buy products that were aggressively advertised on social media: “You want it to work, and that is why you buy the product.” None of the participants mentioned assessing the evidence supporting the claims.
After the unrecorded discussion, MM and the representatives from the developer suggested questions for the game based on the discussion with students and the types of tasks that were technically possible. The proposed questions covered several topics and included case-based questions based on real health claims in social or other media, and the proposals were presented to the reference group.
A meeting with the reference group was held where the proposals were reviewed. The reference group decided that it would be reasonable to concentrate all questions on only one topic to make the game easier to grasp. The goal was to choose a topic that was relevant for all students regardless of their gender, age, or study specialization. The reference group concluded that the topic “dietary supplements” could fit all these requirements. The continued work was based on this idea, and a pilot of the game was created.
The overall impression of the game, based on 2 unrecorded discussions with students and 1 with the study’s reference group, was that it was useful, relevant, and educational; had clear, varied questions; and looked nice. The participants further expressed that learning through a game was fun, earning points was motivating, and that the competitive element was exciting. The perceived learning outcomes included both critical thinking skills and factual knowledge.
When assessing each of the game elements, suggestions for improvements in some of the tasks included extending the time limit, minor rewording, and larger text.
Overall:
“The whole game is positive—very good.”
“Very cool! The questions were clear, and they were varied.”
“It is fun to play, and it looks nice.”
Learning through games:
“Learning through games is nice.”
“Liked that you had to think a bit before you answer.”
Perceived learning outcomes:
“How to look at an advertisement: the first thing you think: is this a scam?”
“Must be more critical of what influencers say.”
“Research done is not always true.”
“That Vitamin C does not prevent colds.”
“Learned a bit more about dietary supplements that you do not think about otherwise.”
Motivational elements:
“You had fun playing.”
“The most fun were those that had a short time frame.”
“The bubble puzzle was fun.”
“You want to play more to get a higher score.”
“High score ranking makes it motivating to play several times.”
In total, 227 students registered in the game application during the implementation phase (phase 3). Of these 227 students, 34 (15%) did not test the game and were therefore excluded from the analysis, leaving 193 (85%) students who tested the game. Of these 193 students, 70 (36.3%) chose to answer the voluntary evaluation questions after the game.
In the game, it was possible to earn stars based on the percentage of correct answers given (1 star=1%-24%; 2 stars=25%-49%; 3 stars=50%-79%; 4 stars=80%-99%; and 5 stars=100%). All players achieved at least 3 stars, and 87% (168/193) of the students earned a minimum of 4 stars. Those achieving 3 stars (25/193, 13%) had a median replay of 1 (range 1-4), those achieving 4 stars (109/193, 56.5%) had a median replay of 1 (range 1-35), and finally, those achieving 5 stars (59/193, 30.5%) had a median replay of 5 (range 1-296;
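The star thresholds described above can be expressed as a small mapping from the percentage of correct answers to a rating; this is an illustrative sketch, not the game’s actual implementation:

```python
# Minimal sketch of the star-rating scheme described in the text:
# 1 star = 1%-24%, 2 = 25%-49%, 3 = 50%-79%, 4 = 80%-99%, 5 = 100%.
def stars(percent_correct: int) -> int:
    """Map the percentage of correct answers to a 1-5 star rating."""
    if percent_correct >= 100:
        return 5
    if percent_correct >= 80:
        return 4
    if percent_correct >= 50:
        return 3
    if percent_correct >= 25:
        return 2
    return 1
```

Under this mapping, a player with 87% correct answers earns 4 stars, matching the band in which most participants (168/193) ended up.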
There was no statistically significant difference (
Median scores of user data.
Stars earned (% correct answers) | Players, n (%) | Replays, median (range) | Points, median (range) |
☆ (1-24) | 0 (0) | 0 (0) | 0 (0) |
☆ ☆ (25-49) | 0 (0) | 0 (0) | 0 (0) |
☆ ☆ ☆ (50-79) | 25 (13) | 1 (1-4) | 3500 (2775-3725) |
☆ ☆ ☆ ☆ (80-99) | 109 (56.5) | 1 (1-35) | 4425 (3725-5108) |
☆ ☆ ☆ ☆ ☆ (100) | 59 (30.5) | 5 (1-296) | 5223 (5000-5579) |
Total | 193 (100) | 2 (1-296) | 4531 (2775-5579) |
Total points achieved in the game at the different faculties.
Responses to the 2 open-ended questions “What do you think about this way of learning?” and “Do you have any other comments or suggestions for the game?” were brief. Almost half of the responses consisted of just one word, and the longest response was 2 sentences. The results from these 2 questions (Q1 and Q3) were analyzed and coded into 4 main themes: (1) “Experience of performing the game,” (2) “Experience of learning,” (3) “Feedback on the design,” and (4) “Suggestions for improvement.”
The students used words such as “fun,” “interesting,” and “educational” to describe their experiences with the game. They further used words such as “motivating,” “effective,” and “engaging” related to the learning experience. The students’ perception of the design was expressed in words such as “nice,” “user-friendly,” “colorful,” and “varied”; however, 1 participant mentioned that the font was small in some of the tasks. Suggestions for improvement varied from adding more games and modules, having available sources, and adding more difficult questions to adding a small summary about the game at the start of the game. A detailed overview of the themes and codes can be found in
The respondents wrote up to 3 words in response to the question “Describe what you think was positive about the game in up to three words” (Q2). To visualize the responses, a word cloud was generated in which more frequent words appear larger. The most frequent words were fun (20 times), varied (11 times), informative (9 times), simple (9 times), easy (8 times), educational (6 times), and interesting (6 times;
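The word frequencies underlying such a word cloud can be tallied in a few lines; the response list below is hypothetical, for illustration only:

```python
# Illustrative sketch of tallying free-text responses before sizing a
# word cloud. The responses here are hypothetical, not the study's data.
from collections import Counter

responses = ["fun", "varied", "fun", "informative", "fun simple"]
words = Counter(
    word for response in responses for word in response.lower().split()
)
# most_common() yields (word, count) pairs, highest count first,
# ready to drive the font size of each word in the cloud.
print(words.most_common(3))
```

A word-cloud library would then scale each word’s display size in proportion to its count.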
Six students showed interest in participating in the recorded FGI evaluation and were provided with a link to a consent form. Four participants signed the form before taking part in the FGI; the other 2 dropped out without giving a reason.
On April 8, 2022, the recorded FGI was conducted with 4 women, 2 of whom were from the Faculty of Health Sciences and 2 from the Faculty of Social Sciences. One of the participants was a master’s student, whereas the others were bachelor’s students. One participant had tested the game in the implementation phase (phase 3), whereas the others had not tested the game before. All participants spoke Norwegian.
Although the data collection followed the interview guide based on Rosenbaum’s honeycomb framework (the results can be found in
The informants expressed that they had experienced the game design as varied and fun, with different task layouts and colors, and they liked the immediate feedback on whether their answers were correct. However, they found the design of the web-based game solution, with a mobile screen image, to be suboptimal because of small letters. On the basis of their previous knowledge of source criticism, their judgment of whether the information seemed plausible, and their knowledge of research, they found the game trustworthy. They were positive about using games as learning tools for critical thinking and reasoning. They further found the game useful in education, writing assignments, and everyday life, but expressed concerns that the academic level of the game was slightly too low. They believed that the target group could be anything from university to primary school, but that there should be various task levels depending on the target group. Other suggestions for further development included having an introductory text explaining the aim of the game, learning stops along the way, and references and sources at the end. However, they still wanted to maintain a game element in the form of points. Suggestions for expanding the topic of the game included politics, climate, nutrition, physical activity, and how to read and interpret research.
In this study, we explored students’ experiences with assessing health news in the media as a basis for the development of a serious game to increase critical thinking about health claims and found that users’ insights and inputs can be successfully used to inform the concept and development of such a game. The students’ user patterns and evaluations of the game revealed that the users experienced the game as educational, fun, and engaging. The mixed methods design helps to better understand user experiences and makes the process replicable for those who aim to develop serious games in critical thinking about claims in other disciplines, such as climate and sustainability. In the following sections, we discuss details of the main findings of this study before a general discussion, in which we compare the results with previous studies.
The initial unrecorded discussion in our study revealed that the students were skeptical of health news shared by influencers or otherwise initiated or spread by commercial actors. Nevertheless, their assessment of whether a claim was trustworthy was limited to Google searches. The disadvantage of this strategy is that, in particular, information on treatment outcomes may be incomplete, written by nonexperts, or have commercial goals [
In total, 87% (168/193) of participants achieved a minimum of 4 out of 5 stars, which may indicate that the degree of difficulty was somewhat low. This was supported by the findings in the open-ended questions in the game where words such as “simple” and “easy” were used, and from the FGI where one participant expressed, “I think perhaps that it was almost too easy.” A detectable correlation was found between the number of replays and total points obtained in the game, suggesting that the game may be suitable for practicing concepts. This was supported by findings from the FGI, where one participant expressed, “[…] it is good that you could learn new concepts.” It may seem that the topic of dietary supplements was perceived as generic, as there was no demonstrable difference between the total scores of students from different faculties. This was reflected in the FGI, where both students from the Faculty of Social Science and the Faculty of Health Sciences found the topic equally interesting.
Joy and higher concentration have been shown to occur when students perceive clear goals, ease of use, and usefulness during gaming [
Although we described user experiences with serious games, we will draw on the literature based on gamification to widen our scope, as both these terms are centered on game elements. In addition, as stated in a review of serious games for health professions, the term “serious games” is not well established; therefore, previous studies on the topic do not necessarily use this term [
Zainuddin et al [
Silva et al [
A different perspective in relation to serious games was used in a study by Zairi et al [
Our study has some limitations. Our first discussion in phase 1 was not audio recorded, which may raise concerns that only data fitting our agenda were retained. In addition, owing to the relatively low number of participants, it is debatable whether we achieved information power. Information power in qualitative research means that the more study-relevant information the participants hold, the lower the number of participants required [
Our goal with the game was to improve critical thinking among students, but it can be questioned whether the tasks in the game were sufficiently designed to develop critical thinking or whether they mainly assessed factual knowledge of the effects of dietary supplements. On the basis of our results, however, it appears that the various questions helped the students increase their awareness of the assessment of health claims in the media, and thus their critical thinking.
Finally, the study aimed to obtain user experiences from concept, development, and implementation to evaluation; thus, it was not possible to draw conclusions about the effect of the game in a learning setting. Randomized trials are necessary to assess the effects of the game.
In qualitative research, including a mixed methods approach, researchers are part of the research process, and we recognize that our previous experiences, assumptions, and beliefs influence it. MM, LGH, and LH are all professors at OsloMet and IKOE is currently a PhD candidate. The professors were lecturers at OsloMet, but IKOE had no formal relationship with any of the students at OsloMet. MM and LH contributed to the development of the game, and to avoid possible social desirability bias, that is, the tendency of research participants to answer what they believe is more socially acceptable rather than answering what reflects their true thoughts [
Acting on misleading health claims can adversely affect health, and there is a need for tools and learning resources that enable people to critically assess these claims. Serious games can be developed in this regard, as they can trigger and motivate students to learn more about a subject. We found that user insights and inputs can be successfully used to inform the concept and development of a serious game that aims to engage students in critical thinking about health claims. On the basis of a mixed methods evaluation of the game, we found that users experienced the game as educational, fun, and engaging. Our results can inform the design and implementation of serious games for health educational purposes related to critical thinking about claims in other disciplines where critical thinking skills are needed, such as climate, democracy, and sustainability. Future research should focus on assessing the effects of serious games on learning outcomes and health choices in randomized trials.
Recruitment to participate in an unrecorded discussion, phase 2.
Interview guide for unrecorded discussion, phase 1.
Examples of game application tasks.
Interview guide for recorded focus group interviews (FGI), phase 4.
Themes and codes for Q1 and Q3, phase 4.
Frequency of words given as responses to Q2, phase 4.
Results from recorded focus group interview (FGI), phase 4.
focus group interview
Informed Health Choices
Oslo Metropolitan University
MM, project manager for the umbrella project “Behind the headlines,” initiated the game development idea and design, contributed to data collection and analyses, and outlined the first draft of the manuscript. IKOE contributed to data collection and analysis and processed and finalized the manuscript. LH contributed to the idea and design of the game, and LGH contributed to the data collection and analyses. All coauthors reviewed the manuscript, provided critical input, and agreed on the final manuscript before submission. The authors wish to thank all students at Oslo Metropolitan University (OsloMet) and other institutions who participated in this study. We also thank research assistants Maria Uldahl and Sandra Mauer for their diligent work related to the recruitment of participants in the implementation phase. Further thanks go to Gro Røkholt, Hilde Tinderholt Myrhaug, Kristin Nordseth, Matt Oxman, and Atle Fretheim for participating in the reference group. In addition, thanks to Astrid Dahlgren who provided valuable input to the manuscript. We also want to thank sikresiden.no, a web-based app created by and for Norwegian universities, colleges, and research institutions, and their employees, Jorid Bodin, Marie Kinge, and Christine Holm Berntzen. Sikresiden had the opportunity to develop a game module on an already existing technology platform developed by Attensi, which specializes in game-based training that combines advanced 3D modeling and human psychology. OsloMet funded the development of the game application and a PhD scholarship for the first author.
The data sets generated and analyzed during this study are available from the corresponding author upon reasonable request. Qualitative data sets are available only in Norwegian.
None declared.