Published in Vol 7 (2023)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/42584.
Reliability and Validity of the Arabic Version of the Game Experience Questionnaire: Pilot Questionnaire Study


Original Paper

1Department of Human Sciences, Higher Institute of Sport and Physical Education of Sfax, University of Sfax, Sfax, Tunisia

2Department of Human and Social Sciences, Higher Institute of Sport and Physical Education of Kef, University of Jendouba, Jendouba, Tunisia

3Postgraduate School of Public Health, Department of Health Sciences (DISSAL), University of Genoa, Genoa, Italy

4Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DINOGMI), University of Genoa, Genoa, Italy

5Department of Child Psychology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, National Children's Regional Medical Center, Hangzhou, Zhejiang, China

6Higher Institute of Sport and Physical Education of Gafsa, University of Gafsa, Gafsa, Tunisia

7Laboratory for Industrial and Applied Mathematics, Department of Mathematics and Statistics, York University, Toronto, ON, Canada

Corresponding Author:

Nicola Luigi Bragazzi, MPH, MD, PhD

Laboratory for Industrial and Applied Mathematics

Department of Mathematics and Statistics

York University

4700 Keele St

Toronto, ON, M3J 1P3

Canada

Phone: 1 416 736 2100 ext 66093

Email: robertobragazzi@gmail.com


Background: Digital gaming now occupies a central position in the entertainment industry and has become a cherished form of entertainment in markets all over the world. In addition, it provides other sectors with various social and economic benefits. The Game Experience Questionnaire (GEQ) is a free, quantitative, and comprehensive self-report measure developed to assess the player's game experience. Although it has been widely used in many research projects, it has not been adapted into Arabic. Furthermore, several components of the scale have proved problematic from a psychometric point of view. Therefore, a modified version of the scale is needed to measure the gaming experience of the Arab population.

Objective: The aim of this study was to validate and examine the psychometrics of an adapted Arabic version of the GEQ in Tunisia.

Methods: A total of 771 volunteer participants completed an online survey, which included an Arabic version of the GEQ, gaming data, and a sociodemographic questionnaire. Participants were randomly assigned to one of the two phases of the study: exploratory and confirmatory. The exploratory data were acquired from 360 respondents whose mean age was 23.89 (SD 2.29) years; of these, 111 (30.8%) were female and 249 (69.2%) were male. Confirmatory data were obtained from the remaining 411 subjects whose mean age was 21.94 (SD 1.80) years; of these, 169 (41.1%) were female and 242 (58.9%) were male.

Results: After the elimination of two items, the exploratory and confirmatory factor analyses provided an adequate factor structure for the Arabic version of the GEQ. In addition, the internal consistency coefficients supported the reliability of the instrument. Significant differences were revealed for the flow subcomponent by age (η2=0.013, P=.002), gender (η2=0.007, P=.02), and game type (η2=0.03, P<.001). For competence (η2=0.01, P=.03) and immersion (η2=0.02, P=.01), significant differences were highlighted by the type of game. The convergent and discriminant validities of the instrument were supported by calculating the average variance extracted (AVE) and comparing the square roots of the AVE values to the correlation coefficients, respectively.

Conclusions: The Arabic adapted version of the GEQ is valid and reliable and can be administered to measure the game experience in Arab countries.

JMIR Form Res 2023;7:e42584

doi:10.2196/42584


Introduction

Nowadays, digital gaming occupies a central position in the entertainment industry, where it has developed into a cherished type of entertainment in markets all over the world. Recent studies have shown that the number of players keeps increasing at a high rate; for instance, Steam, one of the biggest video game distribution platforms in the world, grew from 67 million monthly users in 2017 to 120 million in 2020, according to the SteamDB tracking service [1]. Moreover, a total of 2.7 billion players were recorded around the world by the end of 2020, an increase of more than 135 million players compared with the numbers reported in 2019 [2].

The impact that digital games have had on modern culture has resulted in their proliferation into many spheres of human existence, not only in the form of consumable media goods but also as cultures and ways of living [3-5]. As a result of the increased connectivity and processing power brought about by the proliferation of console, online, and mobile technologies, new types of games are emerging, such as advertising games, augmented reality games, and social media games [6-8]. These games have been developed for a multitude of different platforms, each using digital technologies that cater to different gaming needs and practices, including, for example, social gaming, hardcore gaming, couch gaming, and e-sports [9-11].

New developments in technology and software go up against a yearning for the past that exists across player bases that span many generations and consist of people living wildly different lives [12,13]. A broad variety of different business models emerged so that companies could appeal to vastly diverse player bases. Each model alters the ways in which players may engage with digital games as individuals (eg, players, parents, and children) or collectives (eg, communities, networks, and subcultures), that is, the ways that players live their lives, the ways they think, and the ways they behave [14-16]. As emotions are a fundamental component of human behavior, it is possible that all types of games produce emotional experiences [17]. Thus, game experience is considered one of the most central targets in the development of any game [18].

Research on the game experience allows for the systematic and empirical evaluation of computer games. It supports player experience research by combining the numerical recording of parameters (ie, measured physically from players and technically within the entertainment software) with qualitative assessments of experience, including behavioral observations. Together, these make it possible to render a high-resolution image of the complex interactions driving gameplay and player experience [19-23].

Some self-report measures were developed and validated to assess the player game experience as “an ensemble made up of the player’s sensations, thoughts, feelings, actions, and meaning-making in a gameplay setting” [24,25].

Recent years have seen an increase in the awareness among game makers of the commercial worth of adapting games for the Arab audience [26]. One of the most vibrant and rapidly expanding gaming communities in the world can be found in the Middle East and North Africa [27].

To satisfy the requirements of the Arab gaming community and to broaden the markets available to game developers, a number of video game companies believe that it is essential to develop an evaluation tool in the Arabic language to evaluate the gaming experiences of players in countries where Arabic is the primary language.

Among the tools previously developed, the following instruments are multidimensional: the Game Engagement Questionnaire [28], the Temple Presence Inventory, and the Social Presence in Gaming Questionnaire [29]. However, no Arabic tool has been found to evaluate the game experience in countries where Arabic is the primary language.

Other previously developed tools include the Core Elements of the Gaming Experience Questionnaire [30], the Motivations to Play in Online Games scale, which evaluates motivation as a trait [31], and the Game Experience Questionnaire (GEQ).

The Player Experience of Need Satisfaction (PENS) instrument [32] and the GEQ are two popular scales [33,34]. The PENS is based on self-determination theory [35], which defines how experiences meet universal requirements (ie, competence, autonomy, and relatedness). In contrast, the GEQ does not rely on any particular theory in its construction; instead, it is predicated on conceptual explanations of the player experience in addition to focus groups that were carried out with players of video games. In fact, assessment of player experience incorporates subjective psychological experiences and emotions. The aim is to examine a broad range of experience that encourages a person to desire to play a game, continue playing it, return to it, and promote it to others [36].

The GEQ is a multidimensional instrument that is experimentally and conceptually anchored with high-quality questions. It has been used extensively to examine a variety of playing styles in many research projects. The questionnaire is free to use, quantitative, and comprehensive and is commonly employed to evaluate new games or playful systems.

The first edition of the GEQ consisted of 42 items that were split among seven different variables (ie, challenge, competence, flow, immersion, tension, positive affect, and negative affect). Subsequently, a 33-item version of the instrument was developed. However, the psychometrics of the GEQ were not tested and, therefore, lacked evidence of reliability and validity [36-38].

Moreover, in a systematic review that included 73 studies that used the GEQ, the factor structure of the instrument was criticized. Among the serious criticisms was the lack of psychometric verification; the items did not support a 7-factor structure and some factors were eliminated in the modified version [38].

In the same study and following the exploratory factor analysis (EFA), the overlapping of eight items was highlighted. Additionally, after the confirmatory factor analysis (CFA), the authors proposed a revised structure, in which negative affect, tension, and challenge reflected a single negativity factor; however, the three concepts cannot constitute a single factor. Indeed, challenge is a core element of gameplay in the majority of digital games [39] and is widely considered [38] to play a crucial role in the enjoyment of games [40]. In addition, tension includes feelings of annoyance, frustration, and pressure. On the other hand, negative affect summarizes feelings related to a bad mood and boredom, whereas positive affect includes feelings of happiness and enjoyment. In fact, several studies have reported modifying the GEQ in some fashion, but the details of how and why these modifications were made are rarely revealed.

Therefore, the purpose of this research is to validate an adapted Arabic version of the GEQ and assess the psychometric properties of the questionnaire.


Methods

Data Collection and Procedures

We collected cross-sectional data from an online survey. To distribute the questionnaire and reach the largest number of target population members, we used snowball sampling to collect information from Tunisian Facebook users. This strategy is increasingly used in research involving online recruitment. Invitations to complete an informed consent form sent from specified Gmail accounts were first posted in several Facebook groups. Following this, the respondents asked their friends to participate in the survey. Thereafter, we invited the subjects to register at the Health games website [41].

The creation of this environment allowed us to access users’ IP addresses, delete duplicates, and have a single response per user. However, for privacy and security reasons, no personal information was collected (eg, names, home addresses, and telephone numbers). In addition, this research adheres to the Checklist for Reporting Results of Internet E-Surveys [42].

Participants (N=771) spent an average of 72 (SD 15.2) minutes per week playing video games, of which an average of 60.3 (SD 9.4) minutes per week was spent playing their favorite video game. Participants' favorite games spanned a wide variety of categories within the gaming community.

The largest proportions of subjects played action games (295/771, 38.3%), adventure games (175/771, 22.7%), online multiplayer games (119/771, 15.4%), and real-time strategy games (112/771, 14.5%), whereas the rest played other online games. Those recruited for the study were divided into two groups, for the exploratory and confirmatory studies, and were asked to describe their experience with their favorite games:

  1. Exploratory data were collected from 360 subjects whose mean age was 23.89 (SD 2.29) years and who were randomly selected from the participants. Both female (n=111, 30.8%) and male (n=249, 69.2%) subjects were recruited.
  2. Confirmatory data were collected from 411 subjects (females: n=169, 41.1%; males: n=242, 58.9%) whose mean age was 21.94 (SD 1.80) years; subjects were divided into three grades and had different gaming experience.

The Game Experience Questionnaire

Following the multiple psychometric criticisms of the items designed for the GEQ, we used an adapted Arabic version of the instrument in which the problematic items were reformulated. A focus group comprising two university researchers specializing in psychology, a bilingual translator, and a game expert was formed to translate and modify the questionnaire.

The objective of this step was to make a translation of the GEQ, to develop an initial version in Arabic, and to reformulate the problematic items. Hence, the initial version was translated, revised, and submitted to a back translation. During this procedure, the translation met the methodological criteria of transcultural validation [43,44]. In parallel, problematic items were identified from the literature and reformulated. As a result, these items were deleted and replaced by new items referring to concepts in the literature. The procedure used in the modification and adaptation of the instrument is presented in the flowchart in Multimedia Appendix 1.

The changes concerned seven items of the original questionnaire. As an example, for immersion, the item “It was aesthetically pleasing” was replaced with the item “I found it fantastic.” For flow, we modified two items: “I was fully occupied with the game” and “I was deeply concentrated in the game” were replaced with “I don’t see the time passing” and “I’m not worried about other people’s opinions,” respectively. Multimedia Appendix 2 summarizes the list of modified items, and Multimedia Appendix 3 highlights the Arabic version of the GEQ.

The version developed in Arabic was then subjected to a pilot test on a group of university students in physical education and sports (N=27). Eventually, an English-language translation was developed and accepted by the formed committee.

The Arabic version of the GEQ is a 33-item scale that measures the experience of game players across seven aspects: immersion, flow, competence, positive affect, negative affect, tension, and challenge. The items are presented in the form of statements, and respondents are asked to rate those statements to reflect their level of satisfaction with the game. Responses are rated on a 5-point Likert scale, with the following anchors: 0 (“not at all”), 1 (“somewhat”), 2 (“moderately”), 3 (“fairly”), and 4 (“very”).
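As a practical illustration, the sketch below (in R, the environment used elsewhere in this study) computes subscale scores as the mean of their items. The data frame name (responses, with columns i1 to i33) and the item-to-subscale mapping are assumptions for illustration, following the grouping of the original GEQ core module; the adapted Arabic version may assign items differently.

```r
# Minimal scoring sketch (assumed columns i1-i33, coded 0-4).
# The item groupings below follow the original GEQ core module and are
# illustrative only; the Arabic adaptation may use a different assignment.
score_geq <- function(responses) {
  subscales <- list(
    competence      = c(2, 10, 15, 17, 21),
    immersion       = c(3, 12, 18, 19, 27, 30),
    flow            = c(5, 13, 25, 28, 31),
    tension         = c(22, 24, 29),
    challenge       = c(11, 23, 26, 32, 33),
    negative_affect = c(7, 8, 9, 16),
    positive_affect = c(1, 4, 6, 14, 20)
  )
  sapply(subscales, function(items)
    rowMeans(responses[, paste0("i", items)], na.rm = TRUE))
}

# Example: geq_scores <- as.data.frame(score_geq(responses))
```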

Ethics Statement

The Ethics Committee of the High Institute of Sports and Physical Education of Kef, University of Jendouba, Jendouba, Tunisia, approved this study (approval number: PHS-07/2022). The study was carried out in accordance with the legal norms of the Declaration of Helsinki and its revisions (2013).

Statistical Analysis

Skewness and kurtosis tests were used to explore data normality, whereas multivariate normality was assessed in the confirmatory phase. Data with skewness and kurtosis values over 3 and 7, respectively [45], were considered non-Gaussian and indicative of poor psychometric sensitivity. The Mardia coefficient was used to examine multivariate normality and to detect substantial deviations [46].
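A minimal R sketch of this screening step is shown below, assuming the 33 item responses are stored in a data frame named geq_items; the psych package provides per-item skewness and kurtosis as well as Mardia's multivariate coefficients (the authors cite the QuantPsyc package [59] for the latter, which yields equivalent statistics).

```r
# Sketch: univariate and multivariate normality screening (psych package).
library(psych)

desc <- describe(geq_items)                 # per-item mean, SD, skew, kurtosis
flagged <- rownames(desc)[abs(desc$skew) > 3 | abs(desc$kurtosis) > 7]
flagged                                     # items exceeding the cutoffs, if any

mardia(geq_items, plot = FALSE)             # Mardia multivariate skewness/kurtosis
```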

Unweighted least squares extraction with direct Oblimin rotation was used for the GEQ exploratory analysis, and factors were extracted from the polychoric correlation matrix. Sample adequacy was assessed using the Kaiser-Meyer-Olkin (KMO) statistic; Hair et al [47] stated that the KMO value must be larger than 0.60 to accept the factorial solution. The chi-square value of the Bartlett sphericity test was also computed. Factors were retained based on eigenvalues greater than 1 and inspection of the scree plot. Moreover, items with factor loadings less than 0.5 were eliminated [48].
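The exploratory step was run in JASP; an equivalent sketch in R with the psych package, assuming the exploratory sample is held in a data frame named geq_efa, could look as follows.

```r
# Sketch: EFA on the polychoric correlation matrix with ULS extraction and
# direct Oblimin rotation (the rotation requires the GPArotation package).
library(psych)

poly <- polychoric(geq_efa)                        # polychoric correlation matrix
KMO(poly$rho)                                      # sampling adequacy (> 0.60 expected)
cortest.bartlett(poly$rho, n = nrow(geq_efa))      # Bartlett test of sphericity

efa <- fa(r = poly$rho, nfactors = 7, n.obs = nrow(geq_efa),
          fm = "uls", rotate = "oblimin")
efa$values                                         # eigenvalues for the retention criterion
print(efa$loadings, cutoff = 0.5)                  # flag items loading below 0.5
```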

We performed the CFA with the maximum likelihood method to establish model parameter estimation. CFA goodness of fit is evaluated using a range of model fit indices. Model assessment included chi-square, chi-square/df, goodness-of-fit index (GFI), adjusted GFI (AGFI), comparative fit index (CFI), Tucker-Lewis index (TLI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). Because large samples impact the chi-square fit statistic, the ratio of the chi-square statistic to the df (ie, chi-square/df) is preferable. For GFI and AGFI, Hu and Bentler [49] recommend a critical value of 0.90 or higher to accept the model. CFI and TLI have a threshold value of 0.95 or higher. Moreover, SRMR less than 0.08 and RMSEA less than 0.08 suggest a reasonable fit.
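Because the confirmatory model was fit with the lavaan R package, a hedged sketch is given below. The data frame name (geq_cfa, with columns i1 to i33) and the item-to-factor assignment are placeholders based on the original GEQ core module; the authors' final model excludes the two items removed during the exploratory step.

```r
# Sketch: 7-factor CFA with maximum likelihood estimation (lavaan).
# The item assignment is illustrative, not the authors' final specification.
library(lavaan)

model <- '
  competence =~ i2 + i10 + i15 + i17 + i21
  immersion  =~ i3 + i12 + i18 + i19 + i27 + i30
  flow       =~ i5 + i13 + i25 + i28 + i31
  tension    =~ i22 + i24 + i29
  challenge  =~ i11 + i23 + i26 + i32 + i33
  negative   =~ i7 + i8 + i9 + i16
  positive   =~ i1 + i4 + i6 + i14 + i20
'

fit <- cfa(model, data = geq_cfa, estimator = "ML")
fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "tli",
                   "gfi", "agfi", "rmsea", "srmr"))
```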

The reliability of the instrument was examined by evaluating three internal consistency indices simultaneously (McDonald ω, Cronbach α, and Guttman λ6), since the classical Cronbach α coefficient, which has been reported in the majority of studies [50], has been criticized when applied to multidimensional scales [51]. For the three indices, values of 0.90 and greater indicate outstanding internal consistency, values of 0.80 to 0.90 indicate good reliability, and values of 0.70 to 0.80 are acceptable. Lower values indicate that the internal consistency is unacceptable [52-54].
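A sketch of how the three indices can be obtained per subscale with the psych package is shown below; the data frame and column names are assumptions, and psych::omega prints a warning for single-factor solutions but still returns omega total.

```r
# Sketch: McDonald omega, Cronbach alpha, and Guttman lambda-6 per subscale.
library(psych)

reliability_indices <- function(items) {
  a <- alpha(items)                               # alpha, G6, average interitem r
  w <- omega(items, nfactors = 1, plot = FALSE)   # omega total
  c(omega           = w$omega.tot,
    alpha           = a$total$raw_alpha,
    lambda6         = a$total$`G6(smc)`,
    avg_interitem_r = a$total$average_r)
}

# Example: the negative affect items under the illustrative mapping above
reliability_indices(geq_efa[, c("i7", "i8", "i9", "i16")])
```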

We used univariate analysis of variance, with age as a covariate and partial eta-squared as the magnitude of effect, to compare subscale scores by game type and gender. According to Cohen [55], eta-squared values of less than 0.01 represent trivial effects, values between 0.01 and 0.06 represent small effects, values between 0.06 and 0.14 represent medium effects, and values that exceed 0.14 represent large effects. For every significant difference revealed by the test, we performed a post hoc Bonferroni test.
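A hedged sketch of this sensitivity analysis in R is given below; the data frame and column names (geq_scores with flow, age, gender, and game_type, the latter two assumed to be factors) are placeholders, and the car, effectsize, and emmeans packages stand in for the procedures run in JASP.

```r
# Sketch: type III ANOVA with age as a covariate, partial eta-squared, and
# Bonferroni-adjusted post hoc comparisons (shown for the flow subscale).
library(car)
library(effectsize)
library(emmeans)

options(contrasts = c("contr.sum", "contr.poly"))   # needed for type III tests

m <- lm(flow ~ age + gender * game_type, data = geq_scores)
aov3 <- Anova(m, type = 3)
aov3                                                # F tests (type III sums of squares)
eta_squared(aov3, partial = TRUE)                   # partial eta-squared per effect
emmeans(m, pairwise ~ game_type, adjust = "bonferroni")
```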

Convergent validity was evaluated by calculating the average variance extracted (AVE). To confirm the convergent validity, AVE values should exceed 0.50 [56]. Discriminant validity was established by the Fornell-Larcker criterion [57]. This procedure consists of comparing the square roots of the AVE values with the correlation coefficients between latent constructs [56].
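The sketch below shows one way to obtain these quantities from the fitted lavaan model (fit refers to the CFA object sketched above); the semTools package offers similar convenience functions.

```r
# Sketch: AVE per factor and the Fornell-Larcker comparison.
library(lavaan)

std   <- standardizedSolution(fit)                 # standardized estimates
loads <- subset(std, op == "=~")                   # factor loadings only
ave   <- tapply(loads$est.std^2, loads$lhs, mean)  # AVE = mean squared loading
ave                                                # convergent validity: AVE > 0.50

phi <- lavInspect(fit, "cor.lv")                   # latent factor correlations
sqrt(ave)                                          # compare with off-diagonal entries of phi
```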

The relationships between the instrument dimensions were assessed with a Pearson correlation matrix. To examine these associations, we used low (<0.35), moderate (0.36-0.67), and strong (>0.67) thresholds for the correlation coefficients [58].
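A brief sketch with the psych package, assuming a data frame geq_scores holding the seven subscale scores, is shown below.

```r
# Sketch: Pearson correlation matrix with 2-tailed P values between subscales.
library(psych)

ct <- corr.test(geq_scores[, c("flow", "competence", "positive_affect",
                               "negative_affect", "immersion",
                               "tension", "challenge")],
                method = "pearson", adjust = "none")
round(ct$r, 2)   # correlation coefficients
round(ct$p, 3)   # P values (unadjusted, 2-tailed)
```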

Statistical analyses of the GEQ scores were performed using the free JASP software (version 0.16.3.0; The JASP Team) and the lavaan R package run in RStudio (version 1.3.1093; RStudio, PBC).

Results with P values less than or equal to .05 were deemed statistically significant in all statistical analyses.


Results

Overview

The 33 items of the GEQ were submitted to unweighted least squares EFA with Kaiser normalization and the direct Oblimin rotation method. Sampling adequacy was supported by the KMO value, which was equal to 0.83, and the Bartlett test of sphericity was significant (χ2=4673.8, df=528, P<.001).

The results of the factorial solution suggested the elimination of two items (items 2 and 22; see factor loadings in Table 1) and the extraction of seven factors that explained 61.6% of the total variance.

The first three components extracted from the EFA were negative affect (eigenvalue=5.08), immersion (eigenvalue=3.78), and positive affect (eigenvalue=3.55), which explained 15.4%, 11.5%, and 10.7% of the total variance, respectively. Competence (eigenvalue=2.56), flow (eigenvalue=2.1), and challenge (eigenvalue=1.79) explained 7.7%, 6.1%, and 5.4% of the total variance, respectively. The last component was tension (eigenvalue=1.57), which explained 4.8% of the total variance.

As shown in Figure 1, the cutoff line selects factors with eigenvalues greater than 1. The collected data and the simulated data generated by the JASP software both supported a 7-factor solution: the retained factors lie above the cutoff line drawn perpendicular to the eigenvalue axis at an eigenvalue of 1.
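The JASP scree plot with simulated data corresponds to a parallel analysis; a comparable sketch with psych::fa.parallel, again assuming the exploratory data frame geq_efa, is shown below.

```r
# Sketch: scree plot with simulated (parallel-analysis) eigenvalues, analogous
# to Figure 1; factors are retained where the observed eigenvalues exceed 1
# and the simulated curve.
library(psych)

fa.parallel(geq_efa, fm = "uls", fa = "fa", cor = "poly", n.iter = 100,
            main = "Scree plot of the Arabic GEQ")
```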

Table 1. Descriptive statistics, normality assessment, and factor loadings (ie, lambda) of the exploratory sample analysis.
Item | Score, mean (SD) | Skewness | Kurtosis | λ
1 | 2.97 (0.80) | –0.82 | 0.99 | 0.79
2 | 2.75 (1.04) | –0.83 | 0.32 | <0.5
3 | 3.00 (0.83) | –0.59 | –0.13 | 0.66
4 | 3.03 (0.79) | –0.90 | 1.42 | 0.77
5 | 3.04 (0.79) | –0.86 | 1.14 | 0.78
6 | 3.03 (0.84) | –0.77 | 0.53 | 0.76
7 | 2.97 (0.89) | –0.51 | –0.51 | 0.69
8 | 2.93 (0.88) | –0.67 | 0.23 | 0.68
9 | 2.94 (0.91) | –0.65 | –0.18 | 0.75
10 | 3.01 (0.87) | –0.84 | 0.49 | 0.76
11 | 2.78 (0.93) | –0.51 | –0.36 | 0.79
12 | 2.87 (0.87) | –0.59 | 0.20 | 0.80
13 | 2.95 (0.83) | –0.60 | –0.04 | 0.69
14 | 2.79 (0.91) | –0.35 | –0.67 | 0.74
15 | 2.74 (0.94) | –0.40 | –0.59 | 0.82
16 | 2.08 (0.91) | –0.71 | –0.26 | 0.79
17 | 2.15 (0.89) | –0.88 | 0.05 | 0.81
18 | 2.19 (0.90) | –1.03 | 0.33 | 0.82
19 | 2.00 (0.88) | –0.70 | –0.13 | 0.82
20 | 2.16 (0.91) | –0.84 | –0.20 | 0.85
21 | 2.74 (0.93) | –0.46 | –0.48 | 0.79
22 | 2.44 (1.17) | –0.52 | –0.61 | <0.5
23 | 2.68 (1.07) | –0.43 | –0.86 | 0.81
24 | 2.69 (0.95) | –0.44 | –0.62 | 0.79
25 | 2.63 (1.00) | –0.44 | –0.39 | 0.74
26 | 2.49 (1.11) | –0.41 | –0.77 | 0.75
27 | 2.23 (0.74) | –1.06 | 1.52 | 0.85
28 | 2.24 (0.75) | –1.07 | 1.43 | 0.78
29 | 2.21 (0.81) | –0.69 | 1.93 | 0.76
30 | 2.86 (0.87) | –0.48 | –0.14 | 0.81
31 | 2.78 (0.87) | –0.32 | –0.32 | 0.81
32 | 2.76 (0.89) | –0.53 | 0.09 | 0.77
33 | 2.84 (0.84) | –0.31 | –0.53 | 0.75
Figure 1. Scree plot of the Arabic version of the Game Experience Questionnaire.

Internal Consistency

The internal consistency of the scale was high overall. For all subscales, McDonald ω, Cronbach α, and Guttman λ6 ranged from acceptable to good (Table 2).

The highest values were demonstrated for negative affect (McDonald ω=0.89, Cronbach α=.89, and Guttman λ6=0.87). The smallest values were demonstrated for tension (McDonald ω=0.77, Cronbach α=.77, and Guttman λ6=0.70).

The average interitem correlations of the seven subscales ranged from 0.45 to 0.61 and confirmed the reliability of the GEQ instrument.

Table 2. Internal consistency of the Game Experience Questionnaire.
Aspect | McDonald ω | Cronbach α | Guttman λ6 | Average interitem correlation
Flow | 0.80 | .79 | 0.75 | 0.49
Competence | 0.81 | .80 | 0.78 | 0.45
Positive affect | 0.86 | .85 | 0.83 | 0.54
Negative affect | 0.89 | .89 | 0.87 | 0.61
Immersion | 0.85 | .85 | 0.82 | 0.53
Tension | 0.77 | .77 | 0.70 | 0.53
Challenge | 0.80 | .80 | 0.75 | 0.50

Confirmatory Factor Analysis

Before performing the CFA, our cross-sectional data were submitted to univariate and multivariate normality analysis. The item scores and the skewness and kurtosis normality coefficients are displayed in Table 3. In addition, we calculated the Mardia skewness and kurtosis coefficients for multivariate normality [59]. The standardized skewness coefficient was equal to 104.94 (P<.001), whereas the kurtosis coefficient was equal to 1147.85 (P<.001). Since both P values were less than .05, the items did not follow a multivariate normal distribution and were considered ordinal.

The CFA supported a first-order model with a nonsignificant chi-square value (χ2=454.3, df=413, P=.08; χ2/df=1.1) and adequate error of approximation: the RMSEA was 0.016 (90% CI 0-0.024) and the SRMR was 0.031 (Figure 2). Moreover, the AGFI value was 0.92 and the GFI value was 0.93. Finally, the CFI and TLI values were 0.995 and 0.994, respectively. These results showed that a first-order model fit the data well.

Table 3. Descriptive statistics and normality assessment of the confirmatory sample (n=411).
Item | GEQa score, mean (SD) | Skewness | Kurtosis
1 | 2.69 (0.98) | –0.73 | 0.19
2 | 2.76 (1.00) | –0.80 | 0.35
3 | 2.78 (0.99) | –0.77 | 0.26
4 | 2.75 (0.97) | –0.60 | –0.04
5 | 2.81 (1.02) | –0.87 | 0.38
6 | 2.90 (0.96) | –0.68 | –0.18
7 | 2.93 (0.91) | –0.57 | –0.24
8 | 2.90 (0.93) | –0.61 | –0.08
9 | 2.86 (0.94) | –0.49 | –0.41
10 | 2.90 (0.88) | –0.59 | –0.16
11 | 2.86 (0.93) | –0.57 | –0.29
12 | 2.90 (0.94) | –0.76 | 0.27
13 | 2.85 (0.90) | –0.59 | –0.03
14 | 2.90 (0.98) | –0.65 | –0.27
15 | 2.81 (0.98) | –0.61 | –0.25
16 | 1.92 (1.06) | –0.31 | –0.78
17 | 1.91 (1.04) | –0.38 | –0.70
18 | 1.99 (1.04) | –0.55 | –0.63
19 | 1.87 (1.02) | –0.33 | –0.70
20 | 1.93 (1.04) | –0.39 | –0.79
21 | 2.71 (1.11) | –0.57 | –0.58
22 | 2.75 (1.04) | –0.57 | –0.26
23 | 2.70 (1.11) | –0.44 | –0.89
24 | 2.75 (1.05) | –0.45 | –0.77
25 | 2.64 (1.05) | –0.60 | –0.22
26 | 2.57 (1.11) | –0.37 | –0.75
27 | 1.94 (1.09) | –0.15 | –0.74
28 | 2.00 (1.13) | –0.21 | –0.66
29 | 1.96 (1.11) | –0.28 | –0.83
30 | 2.68 (0.99) | –0.59 | 0.02
31 | 2.76 (1.03) | –0.47 | –0.63
32 | 2.65 (1.00) | –0.44 | –0.51
33 | 2.69 (1.03) | –0.53 | 0.13

aGEQ: Game Experience Questionnaire.

Figure 2. The final confirmatory factor analysis of the Arabic Game Experience Questionnaire. Factor loadings ranged from 0.60 to 0.89. I: item.

Sensitivity of the Scale

Descriptive statistics of the Arabic GEQ subscales by gender and type of game are presented in Multimedia Appendix 4. Significant differences were revealed for flow by age (η2=0.013, P=.002), gender (η2=0.007, P=.02), and game type (η2=0.03, P<.001), with no interaction effect. Significant differences were highlighted by the type of game for competence (η2=0.01, P=.03) and immersion (η2=0.02, P=.01).

Concerning flow, the Bonferroni test, which compares scores according to the type of game, revealed a difference between action games and other games (P=.05). Similarly, for immersion, significant differences were demonstrated between adventure games and strategy games (P=.03) and between adventure games and other games (P=.02; Table 4).

Table 4. Summary of F tests calculated using type III sum of squares for Arabic Game Experience Questionnaire subscales.
Aspect | Age (df=1)a | Gender (df=1) | Type of game (df=4) | Gender × type of game (df=4)
Flow | F=10.15, P=.002, partial η2=0.013 | F=5.41, P=.02, partial η2=0.007 | F=5.78, P<.001, partial η2=0.030 | F=1.02, P=.40, partial η2=0.005
Competence | F=1.14, P=.29, partial η2=0.00 | F=0.33, P=.57, partial η2=0.00 | F=2.75, P=.03, partial η2=0.01 | F=0.40, P=.81, partial η2=0.002
Positive affect | F=0.048, P=.83, partial η2=0.000 | F=3.17, P=.08, partial η2=0.004 | F=1.071, P=.37, partial η2=0.006 | F=0.79, P=.53, partial η2=0.004
Negative affect | F=0.13, P=.72, partial η2=0.000 | F=0.33, P=.57, partial η2=0.000 | F=0.09, P=.99, partial η2=0.000 | F=0.70, P=.60, partial η2=0.004
Immersion | F=0.83, P=.36, partial η2=0.001 | F=0.64, P=.42, partial η2=0.001 | F=3.25, P=.01, partial η2=0.017 | F=2.04, P=.09, partial η2=0.011
Tension | F=0.43, P=.51, partial η2=0.001 | F=0.23, P=.64, partial η2=0.000 | F=0.58, P=.66, partial η2=0.003 | F=3.05, P=.02, partial η2=0.016
Challenge | F=0.497, P=.48, partial η2=0.001 | F=0.54, P=.46, partial η2=0.001 | F=1.02, P=.40, partial η2=0.005 | F=0.27, P=.90, partial η2=0.001

aError df=760; total df=771.

Discriminant and Convergent Validity

The AVE values were all greater than 0.50. For flow and competence, the values were 0.68 and 0.53, respectively. For positive affect, the value was 0.62, and for negative affect, the value was 0.72. Finally, for immersion, tension, and challenge, the values were 0.71, 0.66, and 0.75, respectively.

The discriminant validity of the instrument was confirmed by the values of the square roots of the AVEs, which are presented diagonally in Table 5. All of these values were greater than the values of the correlations between the dimensions. According to these criteria, items should share a greater amount of variance with their intended underlying construct than with the other constructs. The shared variances between factors confirmed the discriminant validity.

Table 5. Discriminant validity of the Arabic Game Experience Questionnaire and correlation analysis (Pearson r and 2-tailed P value).
Aspect | Flow | Competence | Positive affect | Negative affect | Immersion | Tension | Challenge
Flow, r | 0.82a | 0.27b | 0.45b | –0.18b | 0.05 | 0.20b | 0.07
Flow, P value | N/Ac | <.001 | <.001 | <.001 | .34 | <.001 | .16
Competence, r | 0.27b | 0.73a | 0.23b | –0.09 | 0.02 | –0.08 | 0.02
Competence, P value | <.001 | N/Ac | <.001 | .08 | .70 | .10 | .72
Positive affect, r | 0.45b | 0.23b | 0.79a | –0.09 | 0.08 | –0.19b | 0.05
Positive affect, P value | <.001 | <.001 | N/Ac | .07 | .88 | <.001 | .32
Negative affect, r | –0.18b | –0.09 | –0.09 | 0.85a | –0.17b | 0.35b | –0.01
Negative affect, P value | <.001 | .08 | .07 | N/Ac | <.001 | <.001 | .87
Immersion, r | 0.05 | 0.02 | 0.01 | –0.17b | 0.84a | 0.01 | 0.02
Immersion, P value | .34 | .70 | .88 | <.001 | N/Ac | .90 | .70
Tension, r | –0.20b | –0.08 | –0.19b | 0.35b | 0.01 | 0.81a | 0.08
Tension, P value | <.001 | .10 | <.001 | <.001 | .90 | N/Ac | .13
Challenge, r | 0.07 | 0.02 | 0.05 | –0.01 | 0.02 | 0.08 | 0.86a
Challenge, P value | .16 | .72 | .32 | .87 | .70 | .13 | N/Ac

aSquare root of the average variance extracted.

bThe correlation is significant at a significance level of P<.001.

cNot applicable.


Discussion

Principal Findings

The objective of this study was to evaluate the psychometric properties of an adapted Arabic-language version of the GEQ. The results of the exploratory factor analysis suggested the elimination of one item from the flow subscale and another item from the immersion subscale. Examination of the reliability by means of three internal consistency coefficients supported the 7-component factorial solution retaining 31 items. Similarly, examination of the first-order model by means of CFA supported the structure of our adapted version.

Significant differences were revealed for flow by age, gender, and game type. For competence and immersion, significant differences were highlighted by the type of game, which partially supports the sensitivity of our tool.

Finally, the construct validity of the tool was established by convergent and discriminant validity.

In line with our results, the EFA using Oblimin rotation extracted seven factors explaining 62% of the total variance. However, the CFA and EFA results implied that the factor structure of the original GEQ was inadequate and that many items needed to be dropped, which is in line with other studies [38]. That study also showed that the challenge and negative affect components did not have adequate internal consistency, and its CFA indicated that the proposed items did not yield an adequate 7-factor model, with the model not acceptably fitting the data (χ2=1582.0, df=443, P<.001; χ2/df=3.57; CFI=0.88; RMSEA=0.068). Law et al [38] concluded that the positive affect and immersion subscales were reliable; however, they suggested modifications to two flow items.

Converging with these results, the EFA of the original scale showed various problematic items and suggested modification of the scale factors [37]. After the removal of several items and the fusion of subscales, the CFA of the GEQ revealed poor model fit (RMSEA=0.062; CFI=0.834), whereas the revised version of the GEQ with covariances showed acceptable fit indices (Multimedia Appendix 5). However, that work did not explain why the negative affect and tension subscales were merged, as they are two separate concepts [60,61].

In fact, no study that examined the psychometrics of the GEQ has verified the reliability of the original version of the instrument. In this regard, the literature recommends examining the reliability of measurement instruments, which is generally assessed through internal consistency indices, test-retest reliability, or interrater reliability [62-65].

IJsselsteijn et al [33] included negative affect and tension in their GEQ. In contrast, recent work [66] has excluded negative features from the Gameful Experience Scale (GAMEX). In the GAMEX and in game research, these negative elements were described as emotional responses. However, Sabet et al [67] showed that the GEQ has a “forgiving effect” and that players can forgive or forget a bad experience if it coincides with a long duration of a pleasant experience. This demonstrated that negative affect could be present but camouflaged by the positive aspects of the game.

Our results revealed differences depending on the type of game. In parallel, Engl and Nacke [68] found significant interactions for gender, player type, and age in mobile gaming experience. Moreover, Quax et al [69] revealed differences in the experience of gamers according to four categories of games—action games, puzzle games, strategy games, and racing games—in terms of pleasure and frustration.

Our findings did not demonstrate any differences between single and multiplayer games. However, several game experience studies indicated that playing games against other people was more fun and more exciting than playing alone [70,71]. In line with our results, flow, immersion, and positive affect were often used by researchers as indicators of fun and enjoyment during gameplay [72]. In fact, Gajadhar et al [71] concluded that player experience measured by the GEQ is different for positive affect, skill, and tension between multiplayer gaming and gaming against the computer.

Few studies have examined whether gender and age affect gaming experience. Recent studies revealed that male players and younger players performed better in games [73]. As an example, using social theories, Chappetta and Barth [74] showed differences in how games were played and experienced by women compared to men.

However, due to the wide variety of games available, it is legitimate to assert that playing games does not provide a singular experience. Instead, a game might only be identified as useful once the player has participated in a variety of distinct situations, which highlights the fact that the gaming experience encompasses several dimensions.

Limitations of the Study

This study used online questionnaires, which have general and specific limitations. The sampling approach was nonprobability sampling, since study subjects were limited to Facebook users, and its nonprobability nature was accentuated by the open invitation to participate. Participant self-selection made it difficult to identify nonresponse issues and generated an unrepresentative sample. To obtain generalizable results, a study must use probability sampling, a high-quality sampling frame, and enough follow-up to improve response rates. Such research, in partnership with a service developer willing to provide a survey base of current and former users, would be preferable. Also, concerning the modifications made to the GEQ, reverse-coded items were absent from the instrument, which could be considered an additional limitation.

It should be noted that the GEQ manual suggests administering the GEQ soon after the conclusion of a game session; however, the duration between the end of the game and the administration of the questionnaire was not verified. Also, the instrument has only been validated on amateur players; professional players were not taken into consideration.

Another limitation of the study concerns the sensitivity of the instrument, which was only partially examined. For further research, this scale should be administered in several Arab countries. In addition, the questionnaire must be tested on a larger sample of players and with a wider range of games.

Conclusions

EFA and CFA supported the structure of the Arabic GEQ, and the internal consistency indices indicated the instrument's reliability, although the scale appears only partially sensitive to game type. Convergent and discriminant validity supported the tool's construct validity. Overall, the instrument appears to be a valid and reliable tool for assessing the game experience in Arab countries.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Flow diagram of the Game Experience Questionnaire (GEQ) modification and translation.

PNG File , 138 KB

Multimedia Appendix 2

Modified items of the Game Experience Questionnaire (GEQ).

PDF File (Adobe PDF File), 164 KB

Multimedia Appendix 3

Arabic modified version of the Game Experience Questionnaire (GEQ).

PNG File , 158 KB

Multimedia Appendix 4

Descriptive statistics for the Arabic Game Experience Questionnaire (GEQ) subscales by game type and gender.

PDF File (Adobe PDF File), 205 KB

Multimedia Appendix 5

Comparison of fit indices between the Arabic Game Experience Questionnaire (GEQ) and the original version.

PDF File (Adobe PDF File), 428 KB

  1. SteamDB.   URL: https://steamdb.info/app/753/graphs [accessed 2022-12-13]
  2. Chen Y, Hsu C. Self-regulated mobile game-based English learning in a virtual reality environment. Comput Educ 2020 Sep;154:103910. [CrossRef]
  3. Kreissl J, Possler D, Klimmt C. Engagement with the gurus of gaming culture: Parasocial relationships to let’s players. Games Cult 2021 Apr 08;16(8):1021-1043. [CrossRef]
  4. Rahmatullah AS, Mulyasa E, Syahrani S, Pongpalilu F, Putri RE. Digital era 4.0. LingCuRe 2022 Jan 02;6:89-107. [CrossRef]
  5. Barr M, Copeland-Stewart A. Playing video games during the COVID-19 pandemic and effects on players’ well-being. Games Cult 2021 May 06;17(1):122-139. [CrossRef]
  6. Cheah I, Shimul AS, Phau I. Motivations of playing digital games: A review and research agenda. Psychol Mark 2021 Dec 28;39(5):937-950. [CrossRef]
  7. Peñaherrera-Pulla OS, Baena C, Fortes S, Baena E, Barco R. Measuring key quality indicators in cloud gaming: Framework and assessment over wireless networks. Sensors (Basel) 2021 Feb 17;21(4):1387 [FREE Full text] [CrossRef] [Medline]
  8. Stewart NK, Smith R. Networked students gaming together: Mobile scavenger hunts for online classrooms. Commun Teach 2022 Apr 18;37(1):7-13. [CrossRef]
  9. Jaramillo Álvarez MJ. The Community Is the Game: A Corporate Communication Proposal [doctoral thesis]. Quito, Ecuador: Universidad de Los Hemisferios; 2021.   URL: http://dspace.uhemisferios.edu.ec:8080/xmlui/handle/123456789/1240 [accessed 2022-12-13]
  10. Richardson I, Hjorth L, Davies H. Understanding Games and Game Cultures. London, UK: SAGE Publications Ltd; 2021.
  11. Snodgrass JG, Dengah HJF, Upadhyay C, Else RJ, Polzer E. Indian gaming zones as oppositional subculture. Curr Anthropol 2021 Dec 01;62(6):771-797. [CrossRef]
  12. Ponce J. An Endless Ladder: The Preservation of Digital Interactive Artworks [doctoral thesis]. San Diego, CA: University of California San Diego; 2022.   URL: https://escholarship.org/uc/item/71r4h2hv [accessed 2022-12-13]
  13. Stringfield J. Get in the Game: How to Level Up Your Business With Gaming, Esports, and Emerging Technologies. Hoboken, NJ: John Wiley & Sons, Inc; 2022.
  14. Wang Q, Ren H, Long J, Liu Y, Liu T. Research progress and debates on gaming disorder. Gen Psychiatr 2019;32(3):e100071 [FREE Full text] [CrossRef] [Medline]
  15. Wang Q, Liu L, Chen X. Evolutionary dynamics of cooperation in the public goods game with individual disguise and peer punishment. Dyn Games Appl 2019 Nov 29;10(3):764-782. [CrossRef]
  16. Badrinarayanan VA, Sierra JJ, Martin KM. A dual identification framework of online multiplayer video games: The case of massively multiplayer online role playing games (MMORPGs). J Bus Res 2015 May;68(5):1045-1052. [CrossRef]
  17. Ermi L, Mäyrä F. Players' emotional experiences with digital games. In: Proceedings of the 6th DAC Conference - Digital Experience: Design, Aesthetics, Practice. 2005 Presented at: 6th DAC Conference - Digital Experience: Design, Aesthetics, Practice; December 1-3, 2005; Copenhagen, Denmark   URL: https://homepages.tuni.fi/frans.mayra/Ermi-Mayra-DAC2005.pdf
  18. Örtqvist D, Liljedahl M. Immersion and gameplay experience: A contingency framework. Int J Comput Games Technol 2010;2010:613931. [CrossRef]
  19. Fernández Galeote D, Rajanen M, Rajanen D, Legaki N, Langley DJ, Hamari J. Gamification for climate change engagement: Review of corpus and future agenda. Environ Res Lett 2021 Jun 04;16(6):063004. [CrossRef]
  20. Oagaz H, Schoun B, Choi M. Performance improvement and skill transfer in table tennis through training in virtual reality. IEEE Trans Visual Comput Graphics 2022 Dec 1;28(12):4332-4343. [CrossRef]
  21. Park E, Lee S, Ham A, Choi M, Kim S, Lee B. Secrets of Gosu: Understanding physical combat skills of professional players in first-person shooters. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021 Presented at: 2021 CHI Conference on Human Factors in Computing Systems; May 8-13, 2021; Yokohama, Japan p. 1-14. [CrossRef]
  22. Schättin A, Häfliger S, Meyer A, Früh B, Böckler S, Hungerbühler Y, et al. Design and evaluation of user-centered exergames for patients with multiple sclerosis: Multilevel usability and feasibility studies. JMIR Serious Games 2021 May 07;9(2):e22826 [FREE Full text] [CrossRef] [Medline]
  23. Tao G, Garrett B, Taverner T, Cordingley E, Sun C. Immersive virtual reality health games: A narrative review of game design. J Neuroeng Rehabil 2021 Feb 11;18(1):31 [FREE Full text] [CrossRef] [Medline]
  24. Bontchev B, Antonova A, Terzieva V, Dankov Y. “Let Us Save Venice”—An educational online maze game for climate resilience. Sustainability 2021 Dec 21;14(1):7. [CrossRef]
  25. Vidergor HE. Effects of digital escape room on gameful experience, collaboration, and motivation of elementary school students. Comput Educ 2021 Jun;166:104156. [CrossRef]
  26. Al-Batineh M, Alawneh R. Current trends in localizing video games into Arabic: Localization levels and gamers’ preferences. Perspectives 2022;30(2):323-342. [CrossRef]
  27. Arezki R, Belhaj F, Shah P. Promoting a New Economy for the Middle East and North Africa. Washington, DC: World Bank; 2019.   URL: https://openknowledge.worldbank.org/handle/10986/31984 [accessed 2022-12-13]
  28. Brockmyer JH, Fox CM, Curtiss KA, McBroom E, Burkhart KM, Pidruzny JN. The development of the Game Engagement Questionnaire: A measure of engagement in video game-playing. J Exp Soc Psychol 2009 Jul;45(4):624-634. [CrossRef]
  29. de Kort YAW, IJsselsteijn WA, Poels K. Digital games as social presence technology: Development of the social presence in gaming questionnaire (SPGQ). In: Proceedings of the 10th Annual International Workshop on Presence. 2007 Presented at: The 10th Annual International Workshop on Presence; October 25-27, 2007; Barcelona, Spain p. 195-203.
  30. Calvillo-Gámez E, Cairns P, Cox A. Assessing the core elements of the gaming experience. In: Bernhaupt R, editor. Game User Experience Evaluation. Cham, Switzerland: Springer; 2015:37-62.
  31. Yee N. Motivations for play in online games. Cyberpsychol Behav 2006 Dec;9(6):772-775. [CrossRef] [Medline]
  32. Ryan RM, Rigby CS, Przybylski A. The motivational pull of video games: A self-determination theory approach. Motiv Emot 2006 Nov 29;30(4):344-360. [CrossRef]
  33. IJsselsteijn W, van den Hoogen W, Klimmt C, de Kort Y, Lindley C, Mathiak K, et al. Measuring the experience of digital game enjoyment. In: Proceedings of the 6th International Conference on Methods and Techniques in Behavioral Research. 2008 Presented at: The 6th International Conference on Methods and Techniques in Behavioral Research; August 26-29, 2008; Maastricht, The Netherlands p. 88-89.
  34. IJsselsteijn WA, de Kort YAW, Poels K. The Game Experience Questionnaire. Eindhoven, The Netherlands: Technische Universiteit Eindhoven; 2013.   URL: https://pure.tue.nl/ws/portalfiles/portal/21666907/Game_Experience_Questionnaire_English.pdf [accessed 2022-12-13]
  35. Ryan RM, Deci EL. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am Psychol 2000;55(1):68-78. [CrossRef]
  36. Norman KL. GEQ (Game Engagement/Experience Questionnaire): A review of two papers. Interact Comput 2013 Mar 08;25(4):278-283. [CrossRef]
  37. Johnson D, Gardner MJ, Perry R. Validation of two game experience scales: The Player Experience of Need Satisfaction (PENS) and Game Experience Questionnaire (GEQ). Int J Hum Comput Stud 2018 Oct;118:38-46. [CrossRef]
  38. Law ELC, Brühlmann F, Mekler ED. Systematic review and validation of the Game Experience Questionnaire (GEQ) - Implications for citation and reporting practice. In: Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. 2018 Presented at: The 2018 Annual Symposium on Computer-Human Interaction in Play; October 28-31, 2018; Melbourne, Australia p. 257-270. [CrossRef]
  39. Feil JH, Scattergood M. Beginning Game Level Design. Boston, MA: Course Technology Press; 2005.
  40. Vorderer P, Hartmann T, Klimmt C. Explaining the enjoyment of playing video games: The role of competition. In: Proceedings of the 2nd International Conference on Entertainment Computing. 2003 Presented at: The 2nd International Conference on Entertainment Computing; May 8-10, 2003; Pittsburgh, PA p. 1-9.
  41. Health games.   URL: https://health-games.net/ [accessed 2022-12-12]
  42. Eysenbach G. Improving the quality of web surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res 2004 Sep 29;6(3):e34 [FREE Full text] [CrossRef] [Medline]
  43. Epstein J, Santo RM, Guillemin F. A review of guidelines for cross-cultural adaptation of questionnaires could not bring out a consensus. J Clin Epidemiol 2015 Apr;68(4):435-441. [CrossRef] [Medline]
  44. Galanis P. Translation and cross-cultural adaptation methodology for questionnaires in languages other than Greek. Arch Hell Med 2019;36(1):124-135.
  45. Hancock G, Mueller R, editors. Structural Equation Modeling: A Second Course. Greenwich, CT: Information Age Publishing; 2013.
  46. Mardia KV. Applications of some measures of multivariate skewness and kurtosis in testing normality and robustness studies. Indian J Stat 1974;36:115-128.
  47. Hair Jr JF, Sarstedt M, Hopkins L, Kuppelwieser VG. Partial least squares structural equation modeling (PLS-SEM): An emerging tool in business research. Eur Bus Rev 2014;26(2):106-121. [CrossRef]
  48. Ware JE, Gandek B. Methods for testing data quality, scaling assumptions, and reliability. J Clin Epidemiol 1998 Nov;51(11):945-952. [CrossRef]
  49. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct Equ Modeling 1999 Jan;6(1):1-55. [CrossRef]
  50. McNeish D. Thanks coefficient alpha, we'll take it from here. Psychol Methods 2018 Sep;23(3):412-433. [CrossRef] [Medline]
  51. Hayes AF, Coutts JJ. Use omega rather than Cronbach’s alpha for estimating reliability. But…. Commun Methods Meas 2020;14(1):1-24.
  52. Tabachnick BG, Fidell LS. Using Multivariate Statistics. 5th edition. Boston, MA: Pearson; 2007.
  53. Bland JM, Altman DG. Matching. BMJ 1994 Oct 29;309(6962):1128 [FREE Full text] [CrossRef] [Medline]
  54. Sarstedt M, Ringle CM, Hair JF. Partial least squares structural equation modeling. In: Homburg C, Klarmann M, Vomberg A, editors. Handbook of Market Research. Cham, Switzerland: Springer; 2022:587-632.
  55. Cohen J. A power primer. Psychol Bull 1992 Jul;112(1):155-159 [FREE Full text] [CrossRef] [Medline]
  56. Hair Jr JF, Hult GTM, Ringle CM, Sarstedt M, Danks NP, Ray S. An introduction to structural equation modeling. In: Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R. Cham, Switzerland: Springer; 2021:1-29.
  57. Fornell C, Larcker DF. Evaluating structural equation models with unobservable variables and measurement error. J Mark Res 2018 Nov 28;18(1):39-50. [CrossRef]
  58. Taylor R. Interpretation of the correlation coefficient: A basic review. J Diagn Med Sonogr 2016 Jul 01;6(1):35-39. [CrossRef]
  59. Fletcher TD. QuantPsyc: Quantitative psychology tools, version 1.5. The Comprehensive R Archive Network. 2012.   URL: http://cran.nexr.com/web/packages/QuantPsyc/index.html [accessed 2022-12-13]
  60. Miller DJ, Vachon DD, Lynam DR. Neuroticism, negative affect, and negative affect instability: Establishing convergent and discriminant validity using ecological momentary assessment. Pers Individ Dif 2009 Dec 01;47(8):873-877 [FREE Full text] [CrossRef] [Medline]
  61. Lehne M, Koelsch S. Toward a general psychological model of tension and suspense. Front Psychol 2015;6:79 [FREE Full text] [CrossRef] [Medline]
  62. Dick W, Hagerty N. Topics in Measurement: Reliability and Validity. New York, NY: McGraw-Hill Book Company; 1971.
  63. Nahm AY, Rao SS, Solis-Galvan LE, Ragu-Nathan TS. The Q-sort method: Assessing reliability and construct validity of questionnaire items at a pre-testing stage. J Mod Appl Stat Methods 2002;1:114-125. [CrossRef]
  64. Polit DF. Assessing measurement in health: Beyond reliability and validity. Int J Nurs Stud 2015 Nov;52(11):1746-1753. [CrossRef] [Medline]
  65. Taherdoost H. Validity and reliability of the research instrument; how to test the validation of a questionnaire/survey in a research. SSRN J 2016;5(3):28-36. [CrossRef]
  66. Eppmann R, Bekk M, Klein K. Gameful experience in gamification: construction and validation of a gameful experience scale [GAMEX]. J Interact Market 2018 Aug;43:98-115. [CrossRef]
  67. Sabet S, Griwodz C, Möller S. Influence of primacy, recency and peak effects on the game experience questionnaire. In: Proceedings of the 11th ACM Workshop on Immersive Mixed and Virtual Environment Systems. 2019 Presented at: The 11th ACM Workshop on Immersive Mixed and Virtual Environment Systems; June 18, 2019; Amherst, MA p. 22-27. [CrossRef]
  68. Engl S, Nacke LE. Contextual influences on mobile player experience – A game user experience model. Entertain Comput 2013 Feb;4(1):83-91. [CrossRef]
  69. Quax P, Beznosyk A, Vanmontfort W, Marx R, Lamotte W. An evaluation of the impact of game genre on user experience in cloud gaming. In: Proceedings of the 2013 IEEE International Games Innovation Conference. 2013 Presented at: 2013 IEEE International Games Innovation Conference; September 23-25, 2013; Vancouver, BC p. 216-221. [CrossRef]
  70. Sweetser P, Johnson D, Wyeth P, Anwar A, Meng Y, Ozdowska A. GameFlow in different game genres and platforms. Comput Entertain 2017 Apr 04;15(3):1-24. [CrossRef]
  71. Gajadhar B, de Kort Y, IJsselsteijn W. Influence of social setting on player experience of digital games. In: Proceedings of the CHI Conference on Human Factors in Computing Systems. 2008 Presented at: The CHI Conference on Human Factors in Computing Systems; April 5-10, 2008; Florence, Italy p. 3099-3104. [CrossRef]
  72. Bowman ND, Oliver MB, Rogers R, Sherrick B, Woolley J, Chung MY. In control or in their shoes? How character attachment differentially influences video game enjoyment and appreciation. J Gaming Virtual Worlds 2016;8(1):83-99. [CrossRef]
  73. Melchers KG, Basch JM. Fair play? Sex‐, age‐, and job‐related correlates of performance in a computer‐based simulation game. Int J Sel Assess 2021 Jul 27;30(1):48-61. [CrossRef]
  74. Chappetta KC, Barth JM. Gaming roles versus gender roles in online gameplay. Inf Commun Soc 2020 May 18;25(2):162-183. [CrossRef]


AGFI: adjusted goodness-of-fit index
AVE: average variance extracted
CFA: confirmatory factor analysis
CFI: comparative fit index
EFA: exploratory factor analysis
GAMEX: Gameful Experience Scale
GEQ: Game Experience Questionnaire
GFI: goodness-of-fit index
KMO: Kaiser-Meyer-Olkin
PENS: Player Experience of Need Satisfaction
RMSEA: root mean square error of approximation
SRMR: standardized root mean square residual
TLI: Tucker-Lewis index


Edited by A Mavragani; submitted 09.09.22; peer-reviewed by N Turner, J Gamble; comments to author 24.10.22; revised version received 22.11.22; accepted 09.12.22; published 20.03.23

Copyright

©Mahmoud Rebhi, Mohamed Ben Aissa, Amayra Tannoubi, Mouna Saidane, Noomen Guelmami, Luca Puce, Wen Chen, Nasr Chalghaf, Fairouz Azaiez, Makrem Zghibi, Nicola Luigi Bragazzi. Originally published in JMIR Formative Research (https://formative.jmir.org), 20.03.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.