Published on 12.08.2024 in Vol 8 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/50817.
A Serious Game for Enhancing Rescue Reasoning Skills in Tactical Combat Casualty Care: Development and Deployment Study


Original Paper

1Medical School of Chinese People's Liberation Army, Department of Emergency Medicine, the Second Medical Center & National Clinical Research Center for Geriatric Diseases, Chinese People's Liberation Army General Hospital, Beijing, China

2Garrison Veteran Cadres Activity Center, Beijing, China

3Department of Emergency Medicine, the Third Medical Center, Chinese People's Liberation Army General Hospital, Beijing, China

4Health Service Training Center, Chinese People's Liberation Army General Hospital, Beijing, China

5Department of Emergency Medicine, the Second Medical Center & National Clinical Research Center for Geriatric Diseases, Chinese People's Liberation Army General Hospital, Beijing, China

6Department of Nursing, the Second Medical Center & National Clinical Research Center for Geriatric Diseases, Chinese People's Liberation Army General Hospital, Beijing, China

7Department of Nursing, the First Medical Center, Chinese People's Liberation Army General Hospital, Beijing, China

Corresponding Author:

Yuan Gao, PhD

Department of Nursing

the First Medical Center

Chinese People's Liberation Army General Hospital

No. 28 Fuxing Road

Beijing, 100853

China

Phone: 1 381 054 8731

Fax: 86 106 687 6010

Email: gaoyuanzd@163.com


Abstract

Background: Serious games (SGs) have emerged as engaging and instructional digital simulation tools that are increasingly being used for military medical training. SGs are often compared with traditional media in terms of learning outcomes, but it remains unclear which of the 2 options is more efficient and better accepted in the process of knowledge acquisition.

Objective: This study aimed to create and test a scenario-based system suitable for enhancing rescue reasoning skills in tactical combat casualty care.

Methods: To evaluate the effectiveness of the SGs, a randomized, observational, comparative trial was conducted. A total of 148 members from mobile medical logistics teams were recruited for training. Pre- and posttraining assessments were conducted using 2 different formats: a video-based online course (n=78) and a game simulation (n=70). We designed 3 evaluation instruments based on the first 2 levels of the Kirkpatrick model (reaction and learning) to measure trainees’ satisfaction, knowledge proficiency, and self-confidence.

Results: There were 4 elements that made up the learning path for the SG: microcourses (video-based online courses), self-test, game simulation, and record query. The knowledge test scores in both groups were significantly higher after the intervention (t154=–6.010 and t138=–7.867, respectively; P<.001). For the 5 simulation cases, the average operation time was 13.6 (SD 3.3) minutes, and the average case score was 279.0 (SD 57.6) points (from a possible total of 500 points), with a score rate of only 44% (222/500 points) to 67% (336/500 points). The results indicated no significant difference in trainees’ satisfaction between the 2 training methods (P=.37). However, the game simulation method outperformed the video-based online course in terms of learning proficiency (t146=–2.324, P=.02) and self-perception (t146=–5.492, P<.001).

Conclusions: Despite the high satisfaction reported by trainees for both training methods, the game simulation approach demonstrated superior efficiency and acceptance in terms of knowledge acquisition, self-perception, and overall performance. The developed SG holds significant potential as an essential assessment tool for evaluating frontline rescue skills and rescue reasoning in mobile medical logistics teams.

JMIR Form Res 2024;8:e50817

doi:10.2196/50817

Keywords



Introduction

Serious games (SGs) are video games developed specifically for an educational purpose [1] and can be used to train both technical and nontechnical skills [2,3]. SGs are representative of nonimmersive systems: a virtual environment is accessed through a display, with interactions limited to a keyboard and mouse [3]. SGs have become a useful training technology for learning health care procedures and an effective channel for delivering learning content [4]. First aid, triage, and mass emergency response are the fields that have most exploited the safe, controlled environment [5] provided by virtual reality (VR), with games developed to train medical doctors and students. Prominent examples include the French Military Health Service’s SG for training and assessing forward combat casualty care (3D-SC1, 2014) [6], the US Army’s tactical combat casualty care simulation training program (TC3Sim, 2020) [7], and the Joint Theater Level Simulation (JTLS, 2017) software [8], all of which integrate VR and remote instruction. These applications give soldiers immersive and repeatable learning experiences, reducing training costs and shortening training periods. As a result, these SGs hold tremendous value as military training applications.

For a long time, the medical service forces at Chinese military hospitals have had limited opportunities to practice combat casualty care and develop the specialized skills required for actual combat situations. This has exposed various issues, including the common misconception of focusing solely on skill practice without adequately addressing decision-making. Working in emergency medicine requires situational assessment and decision-making as well as initiation of appropriate emergency measures under time pressure, often under adverse external conditions and, at the same time, with little or no fault tolerance [9].

To date, published data about SG use in military medical training are limited. Experimental studies have often compared learning outcomes between SGs and traditional media, but it remains unclear which of the 2 options is more efficient and better accepted in the process of knowledge acquisition [10]. Some studies have shown SGs’ superiority in specific variables related to learning or training effectiveness [9,11], while others have failed to find a statistically significant difference between these 2 training approaches in terms of learning effects [12,13]. In the study by Hu et al [14], compared with online lectures, the game-based learning approach clearly resulted in better acquisition and retention of information related to COVID-19. Similar studies [15] have also found that satisfaction and motivation were greater with SGs than with traditional teaching methods. Several studies have compared the 2 methods but were characterized by a high level of heterogeneity [16] and sometimes provided neutral results [17]. Therefore, a direct comparison of the 2 training methods is needed. In addition, few articles have reported the game development process [18].

Therefore, this study aimed to develop an SG and assess its impact, compared with that of a video-based online course, on the learning outcomes of members within mobile medical logistics teams. This innovative approach endeavored not only to provide a new training tool for mass casualty care but also to implement and analyze the practical application of SGs, thereby illustrating their training effectiveness and educational value.

By conducting a comparative evaluation of the video-based online course and game simulation through the constructed SG, the primary goal of this study was to enhance clinical reasoning and procedural reasoning abilities [19]. The intermediate goal was to improve the overall capacity for rescuing combat casualties, while the ultimate goal was to foster the sharing of health training resources and provide support for the rapid and effective development of mobile medical service units.


Methods

Ethical Considerations

This study obtained ethics approval from the Institutional Review Board of Chinese People’s Liberation Army (PLA) General Hospital (S2021-043-01). Informed consent was obtained from all participants.

Design and Development of the SG

Phase 1: Software Architecture and Learning Path Composition

Because of its capacity to create photorealistic environments and its visual scripting system [20], Unreal Engine was used to develop the visual interface, scene display, and operation of the SG. The system used the Java programming language and a MySQL database to implement the backend logic and data recording.
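The paper does not give implementation details for this layer; the sketch below is a minimal, hypothetical illustration (the table name, columns, and credentials are all invented) of how a Java backend might record one completed simulation case in MySQL via JDBC:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Minimal sketch of the Java + MySQL record-keeping layer. The schema
// (training_record and its columns) is hypothetical, not the authors' actual design.
public class TrainingRecordDao {

    private static final String URL = "jdbc:mysql://localhost:3306/tccc_sg";

    public void saveCaseResult(int traineeId, int caseId, int score,
                               int operationTimeSeconds) throws SQLException {
        String sql = "INSERT INTO training_record "
                   + "(trainee_id, case_id, score, operation_time_s) "
                   + "VALUES (?, ?, ?, ?)";
        try (Connection conn = DriverManager.getConnection(URL, "user", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, traineeId);
            ps.setInt(2, caseId);
            ps.setInt(3, score);
            ps.setInt(4, operationTimeSeconds);
            ps.executeUpdate(); // one row per completed simulation case
        }
    }
}
```

A record query screen would then read these rows back (total score, lost points, total time) with an ordinary SELECT.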

The research group consisted of 1 professor, 1 doctor, and 2 individuals with master’s degrees. The development of the SG used the Kolb Experiential Learning Cycle framework, which achieves effective learning through a cycle of 4 stages: (1) having an experience (“concrete experience”), (2) reflecting on the experience (“reflective observation”), (3) learning from the experience (“abstract conceptualization”), and (4) trying out what you have learned (“active experimentation”) [21]. During case preparation, the research group determined the injuries of the wounded, including injury type, injury position, different treatments, and further course, in each case according to the literature. For example, the Case 3 scenario, “Israeli-Palestinian Conflict,” was designed to be a large-scale air raid, and paramedics needed to rescue 3 wounded personnel with eye trauma, open pneumothorax, and detonation injuries (“concrete experience” and “reflective observation” from the Kolb framework).

During a subsequent Delphi expert consultation, the case script and standard rescue flow chart were sent to 20 experts, and the final version was finalized through 2 rounds of consultation and feedback. Furthermore, participants were given a short presentation about the cases and a review of relevant rescue skills by the teachers. Therefore, the participants learned about the cases experienced in the SG and gained further specific, structured knowledge and skills (“abstract conceptualization” from the Kolb framework).

In a simulation training session, the participants then performed video learning and game operations. The scores of the participants were determined by the software in real time. If desired, participants could also view their scores and errors in the record query. By means of this simulation session, the participants could immediately transfer the newly gained insights to practice situations such as military exercises and training (“active experimentation” in the Kolb framework).

Finally, in subsequent rescue missions, the participants reflected on the renewed experience with similar cases in a case-based manner (“concrete experience” and “reflective observation” in the Kolb framework).

There were 4 learning path elements embedded in the SG: microcourses, self-test, game simulation, and record query.

The first element, the microcourses (video-based online courses), encompassed 8 video courses that covered the entirety of the knowledge test content. The courses used a short PowerPoint presentation with step-by-step voiceover narration. The learning time was automatically accumulated as the videos were watched.

In the self-test, prior to and after simulation training, participants were required to complete general information forms, knowledge tests, confidence evaluation forms, and satisfaction questionnaires to evaluate the training effects.

In the game simulation, 5 battlefield rescue simulation training scenarios were presented through a combination of tree map options and an answer sheet. The interface displayed a rescue progress bar in the left corner, scrolling to exhibit the specific operations associated with each choice and calculating the current training score. The center area of the interface showed the answer sheet, accompanied by corresponding dialog boxes offering hints and analysis based on the selected rescue measures. Located at the top area were the MARCH principle [22] option buttons representing 5 aspects of casualty care: Massive Hemorrhage, Airway Management, Respiration/Breathing, Circulation, and Hypothermia Prevention. The “Other” options encompassed tasks such as tactical handling, fracture fixation, ocular trauma, burn care, and medical evacuation. The timing information was placed in the upper right area, with the final time accessible through the record query.

In the record query, the evaluation record presented the total score, correct rate, and errors. Training records, including lost points, total score, total time, and other data, could be reviewed in the case record.

Phase 2: Operation Process

The game simulation was based on the latest tactical combat casualty care (TCCC) guide [22] and divided into 3 stages: care under fire, tactical field care, and tactical evacuation care. Each scenario featured 3 casualties with varying degrees of injury severity. The training focused on injury classification, selection of rescue measures, and evacuation strategies. Notably, upon completion of the rescue, a standard rescue flow chart was automatically displayed, highlighting the correct treatment measures, the complete procedures, and any missed procedures, for which an automatic virtual instructor took control of the experience to perform the procedure [23]. The operation process is shown in Figure 1.
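As an outline of this flow (a sketch under our own naming assumptions; the enums, point values, and method names are illustrative, not the authors’ implementation), the 3 stages and the MARCH-based scoring described above could be modeled as:

```java
// Illustrative outline of the game loop: three TCCC stages, MARCH-based
// option categories, and per-choice scoring. All names and values are hypothetical.
enum Stage { CARE_UNDER_FIRE, TACTICAL_FIELD_CARE, TACTICAL_EVACUATION_CARE }

enum MarchCategory {
    MASSIVE_HEMORRHAGE, AIRWAY_MANAGEMENT, RESPIRATION_BREATHING,
    CIRCULATION, HYPOTHERMIA_PREVENTION,
    OTHER // tactical handling, fracture fixation, ocular trauma, burns, evacuation
}

record RescueOption(MarchCategory category, String label, boolean correct, int points) {}

class CaseSession {
    private Stage stage = Stage.CARE_UNDER_FIRE;
    private int score = 0;

    // Each click on an option button is scored immediately; the running total
    // feeds the progress bar, and a dialog box would show hints and analysis.
    void choose(RescueOption option) {
        if (option.correct()) {
            score += option.points();
        }
    }

    void advance(Stage next) { stage = next; } // care under fire -> field care -> evacuation

    int score() { return score; }
}
```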

Figure 1. The serious game operation process.

Deployment Evaluation

Design and Sampling

This was a randomized, observational, comparative study. From January 2023 to March 2023, participants were selected from the mobile medical logistics teams at the Second Medical Center of the PLA General Hospital, the Third Medical Center of the PLA General Hospital, and the Medical Service Training Center using convenience sampling.

Participants were excluded if they were not members of a mobile medical logistics team or if they dropped out before completing the study.

The software ran on a laptop computer. No headsets or other related devices were used in this study, so there was no risk of VR sickness such as dizziness or headache. The study had no impact on human health and safety, so no trial registration number was obtained.

The required sample size for the comparison of 2 sample means was calculated according to the formula N = [(Zα/2 + Zβ)σ/δ]²(1/Q1 + 1/Q2) [24]. Based on the differences in mean pretest knowledge test scores between the Control group (microcourses; 9.09, SD 4.944) and the Observation group (game simulation; 11.64, SD 3.392) and considering δ=2.55, σ=4.168, Zα/2=1.96, and Zβ=1.28, the total sample size across both groups was calculated to be 111.51. Allowing for a sample loss rate of 10%, the total sample size should be no less than 122, with no fewer than 61 participants in each group. In this study, a total of 155 subjects were initially included, but 7 were lost to follow-up, so complete data from 148 participants entered the final analysis.
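Plugging in the reported constants (and assuming Q1 = Q2 = 0.5 for the 1:1 allocation, which the paper implies but does not state) reproduces the reported total up to rounding:

```latex
N = \left[\frac{(z_{\alpha/2}+z_{\beta})\,\sigma}{\delta}\right]^{2}
    \left(\frac{1}{Q_1}+\frac{1}{Q_2}\right)
  = \left[\frac{(1.96+1.28)\times 4.168}{2.55}\right]^{2}\times(2+2)
  \approx 112
```

which is consistent with the reported 111.51 once rounding of the intermediate quotient is taken into account.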

Intervention

For randomization, participants were grouped using computer-generated random numbers. A researcher (data processor) who was not informed of the purpose of the study allocated the participants in a 1:1 ratio using sequentially numbered, sealed envelopes. After each participant provided informed consent, the same researcher performed the randomization. As is usual in web-based trials, the participants could not be blinded, but the researcher acting as outcome assessor was blinded.
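The paper specifies only computer-generated random numbers with 1:1, sequentially numbered, sealed-envelope allocation; one conventional way to produce such an allocation list (a sketch, not the authors’ code; the group labels and seed handling are illustrative) is to shuffle a balanced label sequence:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Sketch: computer-generated 1:1 allocation list for sequentially numbered,
// sealed envelopes. Assumes an even number of participants.
public class AllocationList {
    public static List<String> generate(int totalParticipants, long seed) {
        List<String> labels = new ArrayList<>();
        for (int i = 0; i < totalParticipants / 2; i++) {
            labels.add("Control");     // video-based online course
            labels.add("Observation"); // game simulation
        }
        Collections.shuffle(labels, new Random(seed));
        return labels; // envelope k receives labels.get(k - 1)
    }
}
```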

Both groups used the software in a single-player format, with the Control group taking approximately 60 minutes and the Observation group taking approximately 90 minutes. The detailed implementation of the intervention is shown in Figure 2.

Figure 2. Simulation intervention.
Instruments

The Kirkpatrick model, a widely used training evaluation model, was used in this study. The model consists of 4 levels: reaction, learning, behavior, and results [25]. Level 3 and level 4 are particularly challenging to observe in an educational learning program [26], as they require postassessment analysis to measure the application of learning in practice and the overall impact on learners. Due to the diverse backgrounds of the research participants involved in this study, it was challenging to assess how effectively learners applied their knowledge in real-time practice and evaluate the overall impact of the learning session. Therefore, evaluation of levels 3 and 4 was not conducted. We developed 3 instruments according to the Kirkpatrick model to assess participant satisfaction and self-confidence (level 1) and the acquired knowledge through pre- and posttests (level 2) to verify the effectiveness and identify shortcomings of the simulation training.

Knowledge Test

The questionnaire consisted of 2 parts. The first part assessed participant characteristics, including gender, age, major, education level, years of work, professional title, and previous TCCC training or military exercise participation. The second part was the knowledge test, comprising 25 multiple-choice questions with a total score of 100 points. Each correct answer was assigned 4 points, while an incorrect answer received 0 points. The knowledge questionnaire is available in Multimedia Appendix 1.

Self-Confidence Rating Form

This rating form measured participants’ confidence levels in basic knowledge, injury judgment, independent decision-making, and other aspects. Participants were asked to rate their confidence levels on a 5-point scale ranging from “1: Not at all” to “5: Very Much” before and after the simulation training. The self-confidence rating form is available in Multimedia Appendix 2.

Satisfaction Questionnaire

The satisfaction questionnaire was based on the simulation training satisfaction scale developed by the National League for Nursing [27] and a previous simulation training research report [28]. It consisted of 10 items related to software interface, method of use, content difficulty, and contribution to comprehensive combat casualty care abilities. A 5-point Likert scale was used for scoring. The satisfaction questionnaire is available in Multimedia Appendix 3.

Instrument Validity and Reliability

Using standardized questionnaires helps increase the validity and reliability of the conclusions, as noted by other authors [29].

The content validity of the questionnaires was assessed by 11 military and nursing specialists. Subsequently, the questionnaires underwent a pilot test with a random sample of 45 trainees to assess the software’s feasibility and acceptability, as well as the clarity and understandability of the survey questionnaires. After the pilot test, the 3 evaluation questionnaires demonstrated an acceptable level of content validity (94% agreement for the satisfaction questionnaire, 98% agreement for the knowledge test, and 93% agreement for the self-confidence rating form). In terms of reliability testing, the knowledge test exhibited a test-retest reliability of 0.92, while the satisfaction questionnaire and self-confidence rating form achieved Cronbach α coefficients of .88 and .83, respectively, indicating good reliability.

Quality Control

Software Application Training

The software training followed the basic principles of simulation-based training [30]: (1) 20 minutes to learn the Operation Manual (briefing and VR familiarization phase); (2) 10-minute demonstration of the location of each functional area and the operation method of the learning path (training phase); (3) 10-minute introduction to the scoring method and discussion of the common problems (training phase); (4) 15-minute explanation of the correct answers and standard rescue procedures, as well as watching case analysis videos (debriefing phase).

Data Screening

The research group verified the recovered data and eliminated invalid data. The criteria for invalid data included (1) having only baseline data without postintervention data; (2) omissions exceeding 10% of the total number of questions; (3) answer times significantly less than 50 minutes, which indicated potentially invalid questionnaires; (4) choosing the same option consecutively more than 5 times, which raised concerns about invalidity; (5) unreasonable answers comprising ≥20% of the total answers, which indicated low reliability and invalid data.
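As an illustration of how these 5 criteria could be applied mechanically (the Response type and its fields are hypothetical stand-ins for the recovered questionnaire data; the thresholds are taken from the list above):

```java
// Sketch of the data-screening rules described in the text.
record Response(boolean hasPostData, double omissionRate, int answerTimeMinutes,
                int maxSameOptionRun, double unreasonableAnswerRate) {}

final class DataScreening {
    static boolean isValid(Response r) {
        return r.hasPostData()                    // (1) postintervention data present
            && r.omissionRate() <= 0.10           // (2) omissions <= 10% of questions
            && r.answerTimeMinutes() >= 50        // (3) not answered implausibly fast
            && r.maxSameOptionRun() <= 5          // (4) no long same-option runs
            && r.unreasonableAnswerRate() < 0.20; // (5) < 20% unreasonable answers
    }
}
```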

Statistical Analysis

All data analyses were conducted using SPSS version 21.0 (IBM Corp). Continuous data are presented as mean (SD), and categorical variables are presented as counts and percentages. To compare means between the 2 groups, t tests were used when the data met the criteria of normal distribution and homogeneity of variance; the Mann-Whitney rank sum test was used otherwise. Chi-square tests were used to analyze categorical data. For comparisons across the 5 cases, a 1-way ANOVA was used when the criteria of normal distribution and homogeneity of variance were met. When the data were normally distributed but the variances were not homogeneous, the Dunnett T3 procedure was used for pairwise comparisons within the group. If neither normal distribution nor homogeneity of variance could be assumed, the Friedman rank sum test was used. P<.05 was considered statistically significant.
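The branching between these tests can be summarized as follows (a reading aid only; the actual analyses were run in SPSS, and the method names are just strings here):

```java
// Reading aid: restates the test-selection logic described above.
final class TestSelection {
    static String twoGroupTest(boolean normal, boolean homogeneousVariance) {
        return (normal && homogeneousVariance)
                ? "Independent-samples t test"
                : "Mann-Whitney rank sum test";
    }

    static String acrossCasesTest(boolean normal, boolean homogeneousVariance) {
        if (normal && homogeneousVariance) return "1-way ANOVA";
        if (normal) return "Dunnett T3 pairwise comparisons"; // variances unequal
        return "Friedman rank sum test";
    }
}
```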


Results

Design and Development of the SG

Learning Path Composition

Four elements make up the learning path for the SG: video-based online courses, self-test, game simulation, and record query. See Figure 3 for the interface diagrams of the different elements.

Figure 3. Software interface of the simulation training system for tactical combat casualty care (TCCC) skills, including the (A) home page; (B) microcourses: TCCC summary, hemostasis, airway management and tension pneumothorax, fracture fixation, burn treatment, circulation management, eye trauma and craniocerebral trauma, painkillers and antibiotics; (C) self-assessment; (D) game simulation including 5 cases; (E) case background; (F) distant view of the casualty, the total score, the action to take, and the score associated with that action; (G) close-up shot of the wound, with a task to choose the pain medication and the result of the choice; (H) standard rescue flow chart using patient 3 as the example; (I) record query including the evaluation records and case records.
Game Characteristics

The selection buttons were meticulously designed based on a logical tree graph that followed a sequential and deductive structure. Starting from the MARCH principle as the initial selection point, subsequent branches cascaded downward, clearly delineating the sequence of rescue measures. This approach facilitated understanding and memory retention of the learning content while enhancing the ability to make informed judgments in battlefield rescue situations. Moreover, the overall preview of the tree graph enabled operators to have a comprehensive view of all rescue measures, facilitating a better grasp of the overall rescue layout. The hierarchical structure of the selection buttons ensured an intuitive and efficient operation, aiding in organizing rescue thinking and making prompt rescue decisions.
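The sequential, deductive structure can be pictured as a small tree type; the sketch below (node contents invented for illustration) shows the MARCH root with cascading branches and the depth-first preview that gives operators a comprehensive view of all rescue measures:

```java
import java.util.List;

// Sketch of the logical tree behind the selection buttons. Node contents
// are illustrative, not the actual 5 first-level and 36 second-level indicators.
record OptionNode(String label, List<OptionNode> children) {

    static OptionNode demoTree() {
        return new OptionNode("MARCH", List.of(
            new OptionNode("Massive Hemorrhage", List.of(
                new OptionNode("Apply limb tourniquet", List.of()),
                new OptionNode("Pack junctional wound", List.of()))),
            new OptionNode("Airway Management", List.of(
                new OptionNode("Insert nasopharyngeal airway", List.of())))));
    }

    // Depth-first preview of the whole tree, mirroring the overall preview
    // of rescue options shown to operators.
    void preview(String indent) {
        System.out.println(indent + label);
        for (OptionNode child : children) {
            child.preview(indent + "  ");
        }
    }
}
```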

The case base structure incorporated a compound injury setting, encompassing mass injuries of a battle group and multiple injuries of individual soldiers. The content of injuries was thoughtfully organized to present a reasonable gradient of “first aid + self-rescue,” ensuring a gradual learning effect. By addressing the complex treatment challenges associated with mass casualties, the system transcended the limitations of individual soldier skill training. This comprehensive approach not only honed triage, self-rescue, and mutual rescue abilities but also fostered the accumulation of experience in mass injury treatment among team members, ultimately enhancing overall rescue capabilities on the battlefield.

Deployment Evaluation With Targeted Users

Study Population

Complete data were available from 148 participants: 78 participants in the Control group and 70 participants in the Observation group. The demographic characteristics of the participants were similar between the 2 groups, as shown in Table 1.

Table 1. Participant characteristics (n=148).
| Variables | Control group (n=78) | Observation group (n=70) | Chi-square (df) | Z score | P value |
|---|---|---|---|---|---|
| Gender, n (%) | | | 1.5 (1) | —ᵃ | .21 |
| Male | 40 (51) | 43 (61) | | | |
| Female | 38 (49) | 27 (39) | | | |
| Age (years), median (Q1-Q3) | 30 (28-34) | 29 (25-33) | —ᵃ | –1.655 | .10 |
| Major, n (%) | | | 3.3 (2) | —ᵃ | .20 |
| Medicine | 23 (30) | 29 (41) | | | |
| Nursing | 35 (45) | 30 (43) | | | |
| Logistics | 20 (25) | 11 (16) | | | |
| Education, n (%) | | | 0.6 (3) | —ᵃ | .89 |
| Diploma | 14 (18) | 12 (17) | | | |
| Bachelor’s degree | 42 (54) | 37 (53) | | | |
| Master’s degree | 7 (9) | 9 (13) | | | |
| Doctorate | 15 (19) | 12 (17) | | | |
| Work duration (years), median (Q1-Q3) | 8 (6-11) | 7 (4-10) | —ᵃ | –1.458 | .15 |
| Professional title, n (%) | | | 2.5 (2) | —ᵃ | .29 |
| Junior | 36 (46) | 41 (59) | | | |
| Intermediate | 33 (42) | 24 (34) | | | |
| Senior | 9 (12) | 5 (7) | | | |
| Received TCCCᵇ training, n (%) | | | 3.7 (1) | —ᵃ | .06 |
| Yes | 71 (91) | 56 (80) | | | |
| No | 7 (9) | 14 (20) | | | |
| Participated in military exercises, n (%) | | | 0.4 (1) | —ᵃ | .51 |
| Yes | 52 (67) | 43 (61) | | | |
| No | 26 (33) | 27 (39) | | | |

ᵃNot applicable.

ᵇTCCC: tactical combat casualty care.

Comparison of Indicators Between the 2 Groups
Theoretical-Level Comparison of the 2 Groups Before and After the Intervention

The results of intergroup comparisons showed no significant difference in the knowledge test scores between the 2 groups before the intervention (t146=1.605, P=.11). However, the Observation group had a higher score than the Control group after the intervention (t146=–2.324, P=.02). In terms of intragroup comparisons, the knowledge test scores in both groups were significantly higher after the intervention (Control group: t154=–6.010, P<.001; Observation group: t138=–7.867, P<.001); see Table 2 for details.

Table 2. Comparison of the theoretical level of the 2 groups.
| Group | Pre-intervention, mean (SD)ᵃ | Postintervention, mean (SD)ᵇ | t test (df) | P value |
|---|---|---|---|---|
| Control group (n=78) | 68.15 (13.47) | 78.77 (7.88) | –6.010 (154) | <.001 |
| Observation group (n=70) | 64.23 (16.26) | 82.20 (10.04) | –7.867 (138) | <.001 |

ᵃComparison between groups: t146=1.605, P=.11.

ᵇComparison between groups: t146=–2.324, P=.02.

Box plots and frequency histograms of the knowledge test scores for the 148 participants before and after the intervention are shown in Figure 4A and Figure 4B. The average score before the intervention was 66.30 (SD 14.93) points, ranging from 28 points to 96 points. After the intervention, the average score was 80.39 (SD 9.10) points, ranging from 60 points to 100 points. According to the frequency histogram, there was a significant overall improvement in the theoretical level of the participants after the intervention (t294=–9.805, P<.001).

Figure 4. Knowledge test scores: (A) box plots, (B) frequency histogram.
Comparison of the Confidence Levels of the 2 Groups Before and After the Intervention

The results showed significant differences in the scores of 3 items after the intervention: item 4 “I have the confidence to prioritize the injuries” (P<.001), item 5 “I have the confidence to discern the changes of injury independently” (P<.001), and item 6 “I have the confidence to manage injuries independently” (P<.001). However, there was no statistically significant difference in the scores of the other items after the intervention (P=.13 to P=.34). Refer to Table 3 for more details.

Table 3. Average confidence scores compared between the 2 groups after the intervention.
| Confidence items | Control group (n=78), mean | Observation group (n=70), mean |
|---|---|---|
| I have confidence in my basic knowledge. | 3.71 | 3.49 |
| I have faith in the latest ideas and research progress. | 3.26 | 3.43 |
| I have confidence I can assess the injury accurately. | 3.35 | 3.50 |
| I have the confidence to prioritize the injuries. | 3.13 | 3.83 |
| I have the confidence to independently discern the changes in the injuries. | 3.31 | 3.84 |
| I have the confidence to independently manage the injuries. | 3.42 | 4.06 |
| I have the confidence to participate in the military exercises. | 4.05 | 4.19 |
| I have the confidence to fulfill the duties and missions of the mobile medical logistics teams. | 4.22 | 4.33 |
Comparison of Satisfaction Between the 2 Groups After the Intervention

There were statistically significant differences in the scores of 3 items after the intervention: item 2 “Easy to use” (P<.001), item 5 “Arouse the learning interest of TCCC” (P=.01), and item 9 “Promote the combination of theory and practice” (P<.001). However, there was no significant difference in the scores of the other items (P=.08 to P=.72). The total satisfaction scores did not differ significantly between the 2 groups (P=.37). Refer to Table 4 for more details.

Table 4. Satisfaction scores of the 2 groups after the intervention.

| Satisfaction items | Control group (n=78), mean | Observation group (n=70), mean |
|---|---|---|
| Vivid software interface | 3.91 | 3.87 |
| Easy to use | 4.08 | 3.37 |
| Appropriately difficult injury | 4.19 | 4.14 |
| Prompt, effective answers | 4.00 | 4.09 |
| Aroused interest in learning TCCCᵃ | 4.03 | 4.39 |
| Improved basic TCCC knowledge | 4.26 | 4.37 |
| Increased the importance of TCCC for me | 4.10 | 4.27 |
| Improved my reasoning ability within TCCC | 4.10 | 4.17 |
| Promoted the combined use of theory and practice | 3.55 | 4.03 |
| Ensured the study of TCCC serves a practical purpose | 4.23 | 4.29 |

ᵃTCCC: tactical combat casualty care.

Comparison of Indicators in the Observation Group
Comparison of Scores and Operation Times Between 5 Cases

The distributions of the case scores (IQR 58.75) and operation times (IQR 179.25) for case 5 were the most concentrated, while the distributions of case scores (IQR 120) and operation times (IQR 405.5) for case 4 were the most dispersed. The box plot charts in Figure 5A and Figure 5B show the distribution of case scores and operation times, respectively, for the 5 cases.

Figure 5. Boxplots of the (A) operating times for the 5 cases and (B) scores of the 5 cases.

Comparing across the 5 cases, there were significant differences in both the case scores (P<.001) and the operation times (P<.001), indicating an overall increase in case scores and a decrease in operation times with increasing training. The comparisons of case scores and operation times in the Observation group are shown in Table 5.

Table 5. Comparison of scores and operation times within the Observation group.
| Index | Case 1: Desert Shield, mean (SD) | Case 2: Desert Storm, mean (SD) | Case 3: Sword Guard, mean (SD) | Case 4: Burning Balloon, mean (SD) | Case 5: Conflict of China-Vietnam, mean (SD) | Entire group, mean (SD) | F (df) | P value |
|---|---|---|---|---|---|---|---|---|
| Score | 240.00 (42.81) | 259.29 (36.45) | 302.07 (54.09) | 270.91 (62.50) | 322.79 (48.49) | 279.01 (57.69) | 31.36 (4,345) | <.001 |
| Operation time (seconds) | 967.43 (164.59) | 840.83 (176.05) | 803.91 (188.60) | 765.07 (238.32) | 727.50 (134.27) | 820.95 (200.32) | 17.66 (4,345) | <.001 |
Intragroup Comparison of Case Scores and Operation Times

There were significant differences between the scores of the cases, except for the mean differences between case 2 and case 4 and between case 3 and case 5. See Table 6 for details. There were significant differences in operation times between case 1 and cases 2-5 and between case 2 and case 5 (all P<.001; as shown in Table 7).

Table 6. Intragroup comparisons of case scores.
| Groupings | Mean difference (SE) | P value | 95% CI |
|---|---|---|---|
| Case 1 - Case 2 | –19.286 (6.720) | .05 | –38.40 to –0.17 |
| Case 1 - Case 3 | –62.071 (8.245) | <.001 | –85.53 to –38.61 |
| Case 1 - Case 4 | –30.914 (9.054) | .009 | –56.71 to –5.12 |
| Case 1 - Case 5 | –82.786 (7.731) | <.001 | –104.77 to –60.80 |
| Case 2 - Case 3 | –42.786 (7.796) | <.001 | –65.00 to –20.58 |
| Case 2 - Case 4 | –11.629 (8.648) | .86 | –36.30 to 13.05 |
| Case 2 - Case 5 | –63.500 (7.250) | <.001 | –84.14 to –42.86 |
| Case 3 - Case 4 | 31.157 (9.879) | .02 | 3.06 to 59.25 |
| Case 3 - Case 5 | –20.714 (8.682) | .17 | –45.40 to 3.97 |
| Case 4 - Case 5 | –51.871 (9.455) | <.001 | –78.77 to –24.97 |
Table 7. Intragroup comparisons of operation times.
| Grouping | Mean difference (SE) | P value | 95% CI |
|---|---|---|---|
| Case 1 - Case 2 | 126.600 (28.806) | <.001 | 44.70 to 208.50 |
| Case 1 - Case 3 | 163.514 (29.919) | <.001 | 78.44 to 248.59 |
| Case 1 - Case 4 | 202.357 (34.617) | <.001 | 103.76 to 300.96 |
| Case 1 - Case 5 | 239.929 (25.388) | <.001 | 167.71 to 312.15 |
| Case 2 - Case 3 | 36.914 (30.837) | .93 | –50.76 to 124.58 |
| Case 2 - Case 4 | 75.757 (35.414) | .29 | –25.05 to 176.57 |
| Case 2 - Case 5 | 113.329 (26.463) | <.001 | 38.02 to 188.64 |
| Case 3 - Case 4 | 38.843 (36.325) | .96 | –64.51 to 142.19 |
| Case 3 - Case 5 | 76.414 (27.671) | .06 | –2.38 to 155.21 |
| Case 4 - Case 5 | 37.571 (32.694) | .94 | –55.76 to 130.90 |

Discussion

Principal Findings

This study established a novel simulation training system that meets the need for decision-making training in TCCC. First, Unreal Engine 4 and Virtual University Enterprises were used to develop 5 battlefield rescue simulation training scenarios, and 3D modeling was used to build simulated tactical scenarios. This enabled TCCC skills to be trained against a tactical background.

Second, an integrated training mode of “online course, knowledge self-test, game simulation, and error review” was constructed, which provided a new training method to improve decision-making ability.

Comparison of the performance of the video teaching and game simulation showed that the latter was more effective at improving theoretical knowledge, self-confidence, and comprehensive abilities in TCCC.

Simulation Training Software: Stimulating Greater Initiative and Cost-Effectiveness

Currently, conducting simulation training for TCCC requires significant capital investment and time. Because of the lengthy training cycle of rotating personnel, regular and large-scale centralized training is difficult to implement. Although video teaching can be used for centralized training, it primarily provides passive learning and lacks in-depth professional knowledge and innovative thinking. Trainees may experience lower levels of interest and motivation, even though they may acquire more knowledge from a well-designed slide presentation [31]. Simulation training software presents an effective solution to these challenges.

In simulation training software, trainees interact with “virtual wounded” individuals who provide real-time movement, voice, and expression feedback based on different rescue measures. The overall wound situation is presented through panoramic character shots and close-up shots of specific wounds. By simulating various rescue scenes and conducting detailed examinations of injuries, the software creates a 3D, immersive, interactive VR environment. This immersive experience and sense of presence have been identified as key factors for enhancing learning rates [32]. The interactive VR experiences [33] associated with immersive sensations and presence overcome the limitations of passive information reception in traditional teaching methods and promote learners’ active thinking and problem-solving abilities.

Although the initial development costs of SGs may be high, the expected benefits in terms of improved patient care and error prevention provide a compelling argument for investing in their development. First, SGs offer low management costs: they can assess the retention of procedural skills in a practical, cost-effective manner, and they can accommodate an unlimited number of students, saving the cost and time otherwise spent on equipment maintenance and teacher training. Second, SGs have a short development and update cycle. Similar to other e-learning applications, SGs allow cases to be updated and content to be modified [2] based on feedback obtained through repeated use. Moreover, SGs enable cost-effective training for a larger population of trainees [34,35], with the flexibility of being available anytime and anywhere. In our study, participants were able to access the SG on their personal computers and tablets.

The Training Effect of the “Serious Game” Exceeds That of “Video Teaching”

As a teaching tool, evidence has shown VR improves learning outcomes, skill performance, cognitive performance, and knowledge retention [36]. In this study, SGs were more effective at improving trainees’ theoretical knowledge, self-confidence, and comprehensive abilities in TCCC. Therefore, SGs can be considered as the preferred training tool for simulation training.

Kirkpatrick Phase 1: Evaluation of Reaction

In this study, a software satisfaction questionnaire was used to evaluate the interactive graphical user interface and application effect. The overall satisfaction level was high, with no statistical difference in the total scores between the 2 groups (P=.37).

The comparison between the 2 groups revealed that the Control group reported lower scores than the Observation group for “Arouse the learning interest of TCCC” (t146=–2.806, P=.005) and “Promote the combination of theory and practice” (P<.001). This is likely because the video-based learning mode resembled offline collective teaching [37], lacking interest, interaction, and effective stimulation of learning enthusiasm, resulting in a significant satisfaction gap between the 2 groups. Only “Easy to use” scored lower in the Observation group than in the Control group (t146=–5.977, P<.001). Although the selection buttons had a clear structure based on the logical thinking tree and allowed operators to preview the overall rescue options, the case database contained 5 first-level indicators and 36 second-level indicators, which could confuse learners with weaker logical thinking abilities. Consistent with previous studies [20], trainees who experienced the VR environment reported higher levels of satisfaction. The software successfully enhanced learner enthusiasm and interest by providing well-organized content, appropriate difficulty levels, and valuable feedback, thereby indirectly improving overall learning outcomes.

Kirkpatrick Phase 2: Evaluation of Learning

The second level of the Kirkpatrick model involves learning assessment. Knowledge tests, self-confidence rating forms, and comparison of scores between the 2 groups were used to assess the level of knowledge and ability acquired by trainees.

Knowledge Level: TCCC Knowledge Test

Both groups showed significant improvements in theoretical scores from baseline to after the intervention, indicating a perceived increase in knowledge and clinical judgment, which is consistent with findings from other studies [38,39]. After the intervention, the Observation group achieved higher scores than the Control group (t146=–2.324, P=.02), suggesting that the SG self-learning method was statistically superior to the video-based self-learning method in terms of knowledge acquisition.

Self-Perception Level: Self-Confidence Rating Forms

Except for 3 items, the differences in self-confidence between the 2 groups were small. Although the sense of duty could not be improved through a single training session, the results suggested that software development should focus on injury judgment as the next step. Overall, there was a significant difference in total self-confidence scores between the 2 groups after training (P<.001). Intragroup and intergroup comparisons revealed that confidence scores in both groups improved from baseline to after training. Therefore, it can be concluded that training enhances learners’ self-confidence. Research has shown that learners are highly motivated to use SGs because they are more engaging and interactive and provide more continuous feedback than traditional learning methods [40] or e-modules [41]. These results align with previous studies [9] reporting that SG training led to a significant increase in intrinsic motivation among participants.

Comparison of Indicators in the Observation Group

When comparing the scores between the 5 cases using box charts and histograms, the trainees’ mastery of rescue measures for massive hemorrhage, hemorrhagic shock, and closed fractures was consistently high, while mastery of burns and traumatic brain injury showed more variability. Although there was no significant difference in scores between case 2 and case 4 (P=.86), the scores for traumatic brain injury fluctuated greatly, indicating a poor understanding of these knowledge points and the need for consolidation and intensive training to minimize differentiation and internalize knowledge.

Intragroup Comparisons

Regarding intragroup comparisons, the results suggested that the treatment level for tension pneumothorax (Case 1) was not as good as for other injuries. Tension pneumothorax is a special form of open pneumothorax with insidious clinical manifestations and no obvious symptoms other than dyspnea, leading to misjudgment. This indicates the need for further improvement in the trainees’ recognition of injuries and clinical logical reasoning skills.

The average operation time for the SG was 13.68 (SD 3.34) minutes, and the average case score was 279.01 (SD 57.69) points. Previous studies reported average operation times ranging from 195.09 (SD 72.03) to 350.00 (SD 108.36) seconds or 17 minutes to 25 minutes within training scenarios [42]. Therefore, the operation time in our study was deemed reasonable, ensuring adequate attention to each case. The average case score differed significantly from the full score of 500, with a score rate of only 44% (222/500 points) to 67% (336/500 points). These results indicate that the members of the mobile medical service team did not have a good grasp of TCCC knowledge, necessitating reteaching and retraining on the fundamentals of TCCC.

Conclusions

This study developed an innovative tool for TCCC training, complementing offline practical operations. It enhanced overall training effectiveness and provided a new method for training mobile medical logistics teams. The results showed that the “game simulation” achieved better training effects than the “microcourses” for theoretical knowledge, self-confidence, and comprehensive abilities.

The software successfully stimulated the learning enthusiasm and interest of trainees by providing well-organized content, appropriate difficulty levels, and valuable feedback. It improved the trainees’ knowledge levels, self-perception levels, and overall performance. The software enabled cost-effective training for a larger population of trainees, with the flexibility of being available anytime and anywhere.

However, the software still requires improvement in terms of wounded model development, wound spectrum representation, rescue equipment, and other aspects. Future research will focus on capturing behavioral changes and resulting benefits among learners. Further investigation will also involve multicenter verification, the exploration of additional human-computer interaction methods, and the creation of more natural and engaging intelligent user interfaces. These efforts aim to align with emerging trends and attract more young officers and soldiers to participate in TCCC simulation training.

The limitations of this study include the convenience sampling method, lack of long-term follow-up data, limited comparable context for the applied VR technology [42], hardware and software constraints, current deployment only on PCs and laptops, and the need to establish an extensive database.

Acknowledgments

The authors would like to express their gratitude to the head nurses at the Third Medical Center of Chinese People’s Liberation Army (PLA) General Hospital and the Health Service Training Center of the Chinese PLA General Hospital for their valuable assistance in organizing the study.

The study was funded by the health care research platform of the Army 14th five-year construction plan (LB2022B020200).

Data Availability

The data sets generated and/or analyzed during this study are available from the corresponding author (YG) on reasonable request.

Authors' Contributions

The study was designed by SZ, QY, and YG. Data collection was performed by SZ, YS, LK, and ZL. Data analysis was conducted by SZ and MY. Manuscript preparation was carried out by SZ and MY. Final approval for publication was granted by SZ, MY, ZL, QY, and YG.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Knowledge test.

DOC File , 48 KB

Multimedia Appendix 2

Self-confidence rating form.

DOC File , 36 KB

Multimedia Appendix 3

Satisfaction questionnaire.

DOC File , 42 KB

  1. Hamari J, Shernoff DJ, Rowe E, Coller B, Asbell-Clarke J, Edwards T. Challenging games help students learn: An empirical study on engagement, flow and immersion in game-based learning. Computers in Human Behavior. Jan 2016;54:170-179. [CrossRef]
  2. Gentry SV, Gauthier A, L'Estrade Ehrstrom B, Wortley D, Lilienthal A, Tudor Car L, et al. Serious gaming and gamification education in health professions: systematic review. J Med Internet Res. Mar 28, 2019;21(3):e12994. [FREE Full text] [CrossRef] [Medline]
  3. McGrath JL, Taekman JM, Dev P, Danforth DR, Mohan D, Kman N, et al. Using virtual reality simulation environments to assess competence for emergency medicine learners. Acad Emerg Med. Feb 2018;25(2):186-195. [FREE Full text] [CrossRef] [Medline]
  4. Boada I, Rodriguez Benitez A, Thió-Henestrosa S, Soler J. A serious game on the first-aid procedure in choking scenarios: design and evaluation study. JMIR Serious Games. Aug 19, 2020;8(3):e16655. [FREE Full text] [CrossRef] [Medline]
  5. Ricci S, Calandrino A, Borgonovo G, Chirico M, Casadio M. Viewpoint: virtual and augmented reality in basic and advanced life support training. JMIR Serious Games. Mar 23, 2022;10(1):e28595. [FREE Full text] [CrossRef] [Medline]
  6. Pasquier P, Mérat S, Malgras B, Petit L, Queran X, Bay C, et al. A serious game for massive training and assessment of French soldiers involved in forward combat casualty care (3D-SC1): development and deployment. JMIR Serious Games. May 18, 2016;4(1):e5. [FREE Full text] [CrossRef] [Medline]
  7. DeFalco JA, Rowe JP, Paquette L, Georgoulas-Sherry V, Brawner K, Mott BW, et al. Detecting and addressing frustration in a serious game for military training. Int J Artif Intell Educ. Sep 12, 2017;28(2):152-193. [CrossRef]
  8. Bowers FA, Prochnow DL. JTLS-JCATS federation support of emergency response training. 2004. Presented at: 2003 Winter Simulation Conference; December 7-10, 2003:1052-1060; New Orleans, LA. [CrossRef]
  9. Lerner D, Mohr S, Schild J, Göring M, Luiz T. An immersive multi-user virtual reality for emergency simulation training: usability study. JMIR Serious Games. Jul 31, 2020;8(3):e18822. [FREE Full text] [CrossRef] [Medline]
  10. de Sena DP, Fabrício DD, da Silva VD, Bodanese LC, Franco AR. Comparative evaluation of video-based on-line course versus serious game for training medical students in cardiopulmonary resuscitation: A randomised trial. PLoS One. 2019;14(4):e0214722. [FREE Full text] [CrossRef] [Medline]
  11. Kyaw BM, Saxena N, Posadzki P, Vseteckova J, Nikolaou CK, George PP, et al. Virtual reality for health professions education: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res. Jan 22, 2019;21(1):e12959. [FREE Full text] [CrossRef] [Medline]
  12. Planchon J, Vacher A, Comblet J, Rabatel E, Darses F, Mignon A, et al. Serious game training improves performance in combat life-saving interventions. Injury. Jan 2018;49(1):86-92. [CrossRef] [Medline]
  13. Gasteiger N, van der Veer SN, Wilson P, Dowding D. How, for whom, and in which contexts or conditions augmented and virtual reality training works in upskilling health care workers: realist synthesis. JMIR Serious Games. Feb 14, 2022;10(1):e31644. [FREE Full text] [CrossRef] [Medline]
  14. Hu H, Xiao Y, Li H. The effectiveness of a serious game versus online lectures for improving medical students' coronavirus disease 2019 knowledge. Games Health J. Apr 2021;10(2):139-144. [CrossRef] [Medline]
  15. Blanié A, Amorim M, Benhamou D. Comparative value of a simulation by gaming and a traditional teaching method to improve clinical reasoning skills necessary to detect patient deterioration: a randomized study in nursing students. BMC Med Educ. Feb 19, 2020;20(1):53. [FREE Full text] [CrossRef] [Medline]
  16. Berger J, Bawab N, De Mooij J, Sutter Widmer D, Szilas N, De Vriese C, et al. An open randomized controlled study comparing an online text-based scenario and a serious game by Belgian and Swiss pharmacy students. Curr Pharm Teach Learn. Mar 2018;10(3):267-276. [CrossRef] [Medline]
  17. Drummond D, Delval P, Abdenouri S, Truchot J, Ceccaldi P, Plaisance P, et al. Serious game versus online course for pretraining medical students before a simulation-based mastery learning course on cardiopulmonary resuscitation: A randomised controlled study. Eur J Anaesthesiol. Dec 2017;34(12):836-844. [CrossRef] [Medline]
  18. Olszewski AE, Wolbrink TA. Serious gaming in medical education: a proposed structured framework for game development. Simul Healthc. Aug 2017;12(4):240-253. [CrossRef] [Medline]
  19. Kononowicz AA, Zary N, Edelbring S, Corral J, Hege I. Virtual patients--what are we talking about? A framework to classify the meanings of the term in healthcare education. BMC Med Educ. Feb 01, 2015;15:11. [FREE Full text] [CrossRef] [Medline]
  20. Checa D, Miguel-Alonso I, Bustillo A. Immersive virtual-reality computer-assembly serious game to enhance autonomous learning. Virtual Real. Dec 23, 2021:1-18. [FREE Full text] [CrossRef] [Medline]
  21. Wijnen-Meijer M, Brandhuber T, Schneider A, Berberat PO. Implementing Kolb's experiential learning cycle by linking real experience, case-based discussion and simulation. J Med Educ Curric Dev. 2022;9:23821205221091511. [FREE Full text] [CrossRef] [Medline]
  22. Montgomery HR, Drew B, Torrisi J, Adams MG, Remley MA, Rich TA, et al. TCCC guidelines comprehensive review and edits 2020: TCCC guidelines change 20-05 01 November 2020. J Spec Oper Med. 2021;21(2):122-127. [CrossRef] [Medline]
  23. Paige JT, Arora S, Fernandez G, Seymour N. Debriefing 101: training faculty to promote learning in simulation-based training. Am J Surg. Jan 2015;209(1):126-131. [CrossRef] [Medline]
  24. Kim H. Statistical notes for clinical researchers: Sample size calculation 1. comparison of two independent sample means. Restor Dent Endod. Feb 2016;41(1):74-78. [FREE Full text] [CrossRef] [Medline]
  25. Singh N, Gunjan VK, Kadiyala R, Xin Q, Gadekallu TR. Performance evaluation of SeisTutor using cognitive intelligence-based "Kirkpatrick Model". Comput Intell Neurosci. 2022;2022:5092962. [FREE Full text] [CrossRef] [Medline]
  26. Praslova L. Adaptation of Kirkpatrick’s four level model of training criteria to assessment of learning outcomes and program evaluation in Higher Education. Educ Asse Eval Acc. May 25, 2010;22(3):215-225. [CrossRef]
  27. Jeffries PR, Rizzolo MA. Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: A national multi-site study. National League for Nursing. 2006. URL: https://www.nln.org/docs/default-source/uploadedfiles/professional-development-programs/read-the-nln-laerdal-project-summary-report-pdf.pdf [accessed 2024-06-26]
  28. Cooper S, Beauchamp A, Bogossian F, Bucknall T, Cant R, Devries B, et al. Managing patient deterioration: a protocol for enhancing undergraduate nursing students' competence through web-based simulation and feedback techniques. BMC Nurs. Sep 28, 2012;11(1):18. [FREE Full text] [CrossRef] [Medline]
  29. Petri G, Gresse von Wangenheim C. How games for computing education are evaluated? A systematic literature review. Computers & Education. Apr 2017;107:68-90. [CrossRef]
  30. Gaba DM. The future vision of simulation in health care. Qual Saf Health Care. Oct 2004;13 Suppl 1(Suppl 1):i2-10. [FREE Full text] [CrossRef] [Medline]
  31. Parong J, Mayer RE. Learning science in immersive virtual reality. Journal of Educational Psychology. Aug 2018;110(6):785-797. [CrossRef]
  32. Mikropoulos TA, Natsis A. Educational virtual environments: A ten-year review of empirical research (1999–2009). Computers & Education. Apr 2011;56(3):769-780. [CrossRef]
  33. Bhattacharjee D, Paul A, Kim J, Karthigaikumar P. An immersive learning model using evolutionary learning. Computers & Electrical Engineering. Jan 26, 2018;65(13):236-249. [FREE Full text] [CrossRef] [Medline]
  34. Jenkins DH, Cioffi WG, Cocanour CS, Davis KA, Fabian TC, Jurkovich GJ, et al. Coalition for National Trauma Research (CNTR). Position statement of the Coalition for National Trauma Research on the National Academies of Sciences, Engineering and Medicine report, a national trauma care system: integrating military and civilian trauma systems to achieve zero preventable deaths after injury. J Trauma Acute Care Surg. Nov 2016;81(5):816-818. [CrossRef] [Medline]
  35. Lesaffre X, Tourtier J, Violin Y, Frattini B, Rivet C, Stibbe O, et al. Remote damage control during the attacks on Paris: Lessons learned by the Paris Fire Brigade and evolutions in the rescue system. J Trauma Acute Care Surg. Jun 2017;82(6S Suppl 1):S107-S113. [CrossRef] [Medline]
  36. Choi J, Thompson CE, Choi J, Waddill CB, Choi S. Effectiveness of immersive virtual reality in nursing education. Nurse Educ. Oct 12, 2021;47(3):E57-E61. [CrossRef]
  37. Cooper S, Cant R, Bogossian F, Kinsman L, Bucknall T. Patient deterioration education: evaluation of face-to-face simulation and e-simulation approaches. Clinical Simulation in Nursing. Feb 2015;11(2):97-105. [CrossRef]
  38. Du W, Zhong X, Jia Y, Jiang R, Yang H, Ye Z, et al. A novel scenario-based, mixed-reality platform for training nontechnical skills of battlefield first aid: prospective interventional study. JMIR Serious Games. Dec 06, 2022;10(4):e40727. [FREE Full text] [CrossRef] [Medline]
  39. Fogg N, Kubin L, Wilson CE, Trinka M. Using virtual simulation to develop clinical judgment in undergraduate nursing students. Clinical Simulation in Nursing. Nov 2020;48(2):55-58. [CrossRef] [Medline]
  40. Dankbaar MEW, Alsma J, Jansen EEH, van Merrienboer JJG, van Saase JLCM, Schuit SCE. An experimental study on the effects of a simulation game on students' clinical cognitive skills and motivation. Adv Health Sci Educ Theory Pract. Aug 2016;21(3):505-521. [FREE Full text] [CrossRef] [Medline]
  41. Dankbaar MEW, Richters O, Kalkman CJ, Prins G, Ten Cate OTJ, van Merrienboer JJG, et al. Comparative effectiveness of a serious game and an e-module to support patient safety knowledge and awareness. BMC Med Educ. Feb 02, 2017;17(1):30. [FREE Full text] [CrossRef] [Medline]
  42. Jensen L, Konradsen F. A review of the use of virtual reality head-mounted displays in education and training. Educ Inf Technol. Nov 25, 2017;23(4):1515-1529. [CrossRef]


Abbreviations

JTLS: Joint Theater Level Simulation
PLA: People’s Liberation Army
SG: serious game
TCCC: tactical combat casualty care
VR: virtual reality


Edited by T Leung; submitted 13.07.23; peer-reviewed by Q Fei, L Suppan, G Li; comments to author 10.10.23; revised version received 15.12.23; accepted 31.05.24; published 12.08.24.

Copyright

©Siyue Zhu, Zenan Li, Ying Sun, Linghui Kong, Ming Yin, Qinge Yong, Yuan Gao. Originally published in JMIR Formative Research (https://formative.jmir.org), 12.08.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.