Int J Select Assess. 2019;1–13. © 2019 John Wiley & Sons Ltd
New technologies, such as gamification, game-based assessments, and serious games, have recently attracted increased attention in the field of talent identification (Chamorro-Premuzic, Akhtar, Winsborough, & Sherman, 2017). Serious games are games designed and used for a primary goal other than entertainment (Michael & Chen, 2005). In turn, gamification refers to the incorporation of game elements into nongaming activities in any context, such as the workplace, giving birth to game-based assessments, which can be classified according to the level of game characteristics they employ, from gamified assessments, such as multimedia situational judgement tests (SJTs), to different styles of games, such as Candy Crush and Flight Simulator (Hawkes, Cek, & Handler, 2017).
Gamification has been applied to employee selection settings in order to make assessment methods more game-like, thereby improving applicant reactions and possibly increasing the prediction of job performance (Armstrong, Ferrell, Collmus, & Landers, 2016a). However, no published studies to date have established the effectiveness of gamification in the recruitment and selection process. Therefore, a question arises: should researchers and professionals in Work/Organizational Psychology and Human Resource Management be interested in the use and effectiveness of gamified selection methods? Gamified selection methods might improve hiring decisions. For example, traditional methods used in employee selection make two inferential leaps, which gamified selection methods may not make (Fetzer, McNamara, & Geimer, 2017): one from applicants' ratings on multiple-choice items measuring traits and competencies to the extent to which they actually possess these traits or competencies, and another from competencies to applicants' actual job performance. Playing online gamified assessments might simulate situations in which individuals' intentions and behaviors are shown. Depending on the type of game design and elements used in assessments, applicants' attention might be diverted from the fact that they are being evaluated, showcasing their true behaviors and, as a result, reducing faking and/or social desirability biases (Armstrong, Landers, & Collmus, 2016b). Therefore, gamified selection methods might reduce the traditional methods' inferential leaps, thus improving the prediction of job performance.
  Revised:2 2March2019 
DOI : 10.1111 /ij sa.1224 0
Gamification in employee selection: The development of a gamified assessment

Konstantina Georgiou | Athanasios Gouras | Ioannis Nikolaou

Department of Management Science and Technology, School of Business, Athens University of Economics and Business, Athens, Greece

Correspondence: Konstantina Georgiou, Department of Management Science and Technology, School of Business, Athens University of Economics and Business, 76, Patission Str., Athens GR10434, Greece.
Gamification has attracted increased attention among organizations and human resource professionals recently, as a novel and promising concept for attracting and selecting prospective employees. In the current study, we explore the construct validity of a new gamified assessment method in employee selection that we developed following the situational judgement test (SJT) methodology. Our findings support the applicability of game elements to a traditional form of assessment built to assess candidates' soft skills. Specifically, our study contributes to research on gamification and employee selection by exploring the construct validity of a gamified assessment method, indicating that the psychometric properties of SJTs and their transformation into a gamified assessment are a suitable avenue for future research and practice in this field.
Keywords: employee selection, gamified assessment method, situational judgement test
Since recent studies have demonstrated the applicability of SJTs in high fidelity modes, such as video, multimedia, and interactive formats (Lievens & Sackett, 2006a) and gamified contexts (Armstrong, Landers et al., 2016b), the purpose of our research is to explore the development and construct validity of a SJT assessment that has been subsequently converted into a gamified assessment. Specifically, we used gamification to gamify a form of assessment that we initially developed (the SJT). To achieve our goal, we conducted two studies: the development and construct validation of a SJT (Study 1), and the replication of the results with a gamified version of the SJT and its cross-validation (Study 2).
Similar to work sample and multimedia assessment tools, gamified selection methods assess applicants' knowledge, skills, abilities, and other characteristics (KSAOs), which have been shown to predict job performance (e.g., Lievens & De Soete, 2012; Schmidt & Hunter, 1998). Moreover, the use of gamified selection methods might lead to increased engagement and positive perceptions of the organization, signaling that it is at the cutting edge of technology and offering a competitive advantage in the war for talent (Fetzer et al., 2017). Chow and Chapman (2013) have claimed that gamification can be used effectively in the recruitment process to attract a large number of candidates, improve organizational image and attractiveness and, as a result, positively affect applicants' job pursuit behaviors toward an organization. Game elements might also improve the selection process, since it is more difficult for test-takers to fake the assessment, as desirable behaviors may be less obvious to individuals playing the game; as a result, game elements may improve the prediction of job performance and hiring decisions (Armstrong, Landers et al., 2016b). This could especially be the case for traditional selection methods, such as personality tests, which are prone to faking, undermining their predictive validity (Murphy & Dzieweczynski, 2005). The gamification of selection methods is also likely to improve performance prediction by impeding information distortion and providing better quality information about the test-takers (Armstrong, Landers et al., 2016b). However, this is not an inherent quality of gamification; it largely depends on the type of gamification whether candidates are less likely to identify the correct or desirable answer and distort their responses, either intentionally to inflate their scores or unintentionally to appear socially desirable (Richman, Kiesler, Weisband, & Drasgow, 1999).
Gamified assessment methods also have the potential to extract information about candidates' behavior more accurately than personality inventories (Armstrong, Landers et al., 2016b). Specifically, contrary to personality questionnaires, they do not rely on self-reported data. Instead of asking participants to indicate their agreement with various statements, they can assess gameplay behaviors to measure candidates' skills. These gameplay behaviors can be job related and, as a result, might predict future work behavior more accurately than questionnaires (Armstrong, Landers et al., 2016b). Furthermore, Armstrong, Ferrell et al. (2016a) recently clarified that "a gamified assessment is not a stand-alone game, but it is instead an existing form of assessment that has been enhanced with the addition of game elements" (p. 672). This implies that gamified assessments reflect an advanced level of existing typical types of selection methods, a meta-method that may incrementally improve the prediction of job performance (Lievens, Peeters, & Schollaert, 2008).
Recently, different types of gamified assessment methods have been developed by various specialized companies, such as Owiwi and Pymetrics, whereas others have focused on developing game-based assessments (e.g., Arctic Shores and cut-e), attracting increased interest and use among organizations globally (Nikolaou, Georgiou, Bauer, & Truxillo, 2019). These gamified assessments might assess an applicant's cognitive ability or judgment regarding a situation encountered in the workplace. However, gamification types in employee selection vary and can include various elements, ranging from narrative elements, such as additional text in an online questionnaire, to highly interactive game elements, such as avatars and digital rewards (Armstrong, Ferrell et al., 2016a). For example, gamified assessments might include virtual worlds sharing characteristics akin to work settings, and avatars representing employees, in order to assess candidates' skills and elicit job relevant behaviors (Laumer, Eckhardt, & Weitzel, 2012). Nevertheless, more research is needed to test the effectiveness of gamified assessment methods and establish valid and robust theoretical underpinnings to confirm their applicability in human resource management and employee selection settings.
On the other hand, research has already shown that SJTs predict job-related behaviors above cognitive ability and personality tests (Lievens et al., 2008). SJTs tend to determine behavioral tendencies, assessing how an individual will behave in a certain situation, and are assumed to measure job and situational knowledge (Motowidlo, Dunnette, & Carter, 1990; Motowidlo, Hooper, & Jackson, 2006). Additionally, several scholars have concluded that SJTs can tap into a variety of constructs, ranging from problem solving and decision-making to interpersonal skills, and that they are able to measure multiple constructs at the same time (e.g., Christian, Edwards, & Bradley, 2010). Also, recent research (Krumm et al., 2015; Lievens & Motowidlo, 2016) indicated that more general domain knowledge can be assessed by SJTs, depending on the content of the situations developed, leaving room for researchers and practitioners to better capture general soft skills and to administer such tests to a broader target audience.
Moreover, video technology has been successfully applied to SJTs, increasing their effectiveness (e.g., Olson-Buchanan, Drasgow, Weekley, & Ployhart, 2006). To be more specific, the increased fidelity of presenting the situations in video format might lead to higher predictive validity, whereas increased realism might result in favorable applicant reactions (Lievens & Sackett, 2006b). Oostrom, Born, Serlie, and Molen (2010) showed that an open-ended webcam SJT, utilizing a webcam instead of a static video recorder to capture the responses of participants, predicts job placement success. Rockstuhl, Ang, Ng, Lievens, and Dyne (2015) endeavored to predict task performance and interpersonal OCB by expanding the traditional SJT paradigm to multimedia, implementing it across different cultural samples. In both cases, additional elements in SJTs, such as a webcam and video-based vignettes, respectively, contributed to better prediction of performance, providing support for this practice as a promising method for personnel selection. More recently, Lievens (2017) suggested that webcam SJTs seem to be a promising approach for understanding intra-individual variability in controlled settings, by combining procedural knowledge and expressed behavior. It has also been suggested that incorporating game elements into an existing HR practice might have a higher return on investment for an organization than developing a whole new digital game (Landers, 2014). Considering the psychometric qualities of SJTs (McDaniel, Hartman, Whetzel, & Grubb, 2007), along with the performance results when they are integrated with multimedia and game elements, the gamification of SJTs seems to be an appropriate method to follow. Armstrong, Ferrell et al. (2016a, p. 672) recently emphasized the role of gamification as "especially valuable to practitioners in an era moving toward business-to-consumer (B2C) assessment models," which is highly applicable to our research. Taking the above into consideration, we chose the SJT as the most appropriate methodology with which to develop an assessment and then convert it into a new gamified assessment. To establish the effectiveness of the gamified selection method, we first explore the construct validity of a new SJT and then the replication of the results with a gamified version of the test.
Our aim was to gamify an assessment method that would help organizations map out prospective employees' soft skills. We first needed to identify the most common core competencies and skills organizations often seek from their employees, especially when recruiting for graduate trainee and entry-level positions. For example, adaptability, flexibility, learning agility, knowledge breadth, and multicultural perspective have often been described as key competencies for employability across several stakeholder groups (e.g., Gray, 2016; "The digital future of work," 2017; Robles, 2012). Moreover, among the most common skills that individuals may use in several job positions are decision-making, flexibility, and the ability to work under pressure, whereas employers often face difficulty in locating young graduates possessing soft skills such as resilience and teamwork (Clarke, 2016). Following an extensive search of the literature and research on graduate employability, we selected four skills that seem increasingly relevant in today's demanding work environments (resilience, adaptability, flexibility, and decision-making) to form initially the SJT's and subsequently the gamified assessment's dimensions.
We believe that these skills, which have been identified as key transferable soft skills integral to graduate employability (Andrews & Higson, 2008), are more suitable to be assessed through a gamified assessment than through traditional selection methods, such as interviews or psychometric tests. Moreover, many authors claim that the difficulty in transferring and assessing soft skills, compared to hard skills (e.g., technical or business-related knowledge and skills), results in an increased waste of time and money for organizations (e.g., Laker & Powell, 2011), which accounts for our focus on soft skills and the need to use an assessment that may provide better quality information about candidates' behavior on the job. For example, Kubisiak, Stewart, Thornbury, and Moye (2014) employed self-report surveys to assess willingness to learn and a gamified simulation to assess ability to learn, concluding that a gamified assessment can be used to assess predictor constructs in a selection context where survey methodology may not be adequate. Similarly, since resilience, adaptability, flexibility, and decision-making do not address intentions but behaviors, a gamified assessment might be better employed to measure these important attributes among job applicants.
Subsequently, we chose the type of gamification to employ. There is still limited research in the human resource management and work/organizational psychology literature on gamification in selection and assessment, but there are recommendations for researchers to approach gamified assessment by addressing which game elements might affect assessment outcomes, and in what way (Armstrong, Ferrell et al., 2016a). Drawing from the taxonomy of gamification elements for use in educational contexts (Dicheva, Dichev, Agre, & Angelova, 2015), we gamified the SJT assessment with respect to the following gamification design principles: engagement, feedback, progress, freedom of choice, and storytelling. Although there are fundamental differences between game-based learning and gamified assessments, as the objective in learning is to motivate rather than to measure, the common gamification principles of game-based learning might also be appropriate for selection (Hawkes et al., 2017). Dicheva et al. (2015) reviewed previous studies on the application of gamification in education and mapped the context of application and the game elements used. The game elements are conceptualized as the gamification design principles together with the game mechanics that are typically used to implement them. For example, the game mechanics used for the principles of engagement and feedback might be avatars (e.g., Deterding, Björk, Nacke, Dixon, & Lawley, 2013), immediate rewards instead of vague long-term benefits (e.g., Zichermann & Cunningham, 2011), and immediate or cyclical feedback (e.g., Nah, Zeng, Telaprolu, Ayyappa, & Eschenbrenner, 2014). In addition, the progress principle is achieved by using a progress bar or points and levels (e.g., Zichermann & Cunningham, 2011), while storytelling is achieved by using avatars (e.g., Nah et al., 2014) and visual and voice-overs. Finally, among the most common gamification design principles in educational settings is freedom of choice (Dicheva et al., 2015), which in a gamified assessment may relate to how players interact with the game as well as to other choices players may make, for example, whether they can skip a level, leave the game at any time, save it and return later, and so on (Hawkes et al., 2017).
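The principle-to-mechanic groupings described above can be expressed as a simple lookup table. This is our own illustrative shorthand of the examples in the text, not a published artifact of the Dicheva et al. taxonomy:

```python
# Illustrative mapping of gamification design principles to the game
# mechanics mentioned in the text. Names are our own shorthand.
DESIGN_PRINCIPLES = {
    "engagement": ["avatars", "immediate rewards"],
    "feedback": ["immediate feedback", "feedback cycles"],
    "progress": ["progress bar", "points", "levels"],
    "storytelling": ["avatars", "visual overs", "voice-overs"],
    "freedom_of_choice": ["skip level", "leave and resume", "save game"],
}

def mechanics_for(principle: str) -> list[str]:
    """Return the game mechanics typically used to implement a principle."""
    return DESIGN_PRINCIPLES.get(principle, [])

print(mechanics_for("progress"))  # -> ['progress bar', 'points', 'levels']
```

A table of this kind makes it easy to audit which principles a given assessment design actually covers.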
We gamified the SJT assessment by using game mechanics that serve those principles. For example, at the beginning of the assessment, test-takers select a play hero/avatar. Every crew member appearing in the gamified assessment has a backstory. The story follows the journey of the play heroes across four islands, one for each soft skill assessed. Storytelling/narration takes place using visual and voice-overs while playing the "game." We employed narration and fantasy in the gamified assessment to bring in engagement, meaning, and clear calls to action, showing test-takers how to get on a path, in other words, how to respond to scenarios. We could have used narration reflecting the real world, but this might lack the emotional advantages of fantasy and adventurous stories that keep people engaged. Indeed, according to Malone and Lepper (1987), fantasy is one of the key features users appreciate in a game and one of the most important features of games for raising a player's imagination.
There is also a visual progress bar showing progress through the assessment, as well as story troubleshooting mechanisms and voice-overs to remind users what the interface does and how to play the "game." There is also a world map showing the islands the players progress through. Rewards given to test-takers are intrinsic, from successfully completing the missions/solving the scenarios, and extrinsic, in the form of a report including feedback on the player's competencies following the completion of the "game." Test-takers also have freedom of choice: they choose their avatar, they can skip the narrative, and they can leave the "game" at any time and continue from where they left off. Finally, a fine balance is kept between assessment and game mechanics to make the experience as fun and engaging as possible without alienating nongamers or discriminating against them, keeping it fair and providing equal opportunities to all. In an adventure story setting, it might be more difficult to ascertain the context of a question, making candidates think twice and respond with a more representative answer, while their interest in the assessment might also be increased.
4.1 | Samples
Initially, 20 experienced HR professionals in employee selection and assessment from various hierarchical levels (directors, managers, and recruiters), based in Athens, Greece, were interviewed during the development phase of the SJT. Also, seven HR professionals served as experts to determine the scoring key of the new SJT. For face validation purposes, another group of eight HR practitioners completed the SJT. Additionally, 321 business school students and graduates (61% female), with a mean age of 26.5 years (SD: 5.4 years) and an educational level of 42% bachelor's degree and 41% master's degree, served as the construct validity and confirmatory factor analysis sample. For the replication with the gamified SJT, we gathered 410 employees or job seekers (46% female), in addition to the 321 test takers of the previous step, with an average age of 29 years (SD: 7.4 years), 72% of whom had a bachelor's or master's degree.
4.2 | Measures
4.2.1 | SJT measurement
Twenty-five scenarios, each accompanied by four response options, describing (a) Resilience, (b) Adaptability, (c) Flexibility, and (d) Decision-Making situations were developed. Each scenario is accompanied by a scoring key indicating the correct, wrong, and neutral alternatives. The participant should indicate which alternative serves as correct and which as wrong in each situation. Every correct choice gave +1 point to the test taker, every wrong choice −1 point, and 0 points were given to the other two options. Each participant received four separate scores, one for each scale, derived by summing up the individual scenario scores. A sample scenario of the SJT is presented in the Appendix.
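The scoring rule described above can be sketched in a few lines. The scenario keys and responses below are hypothetical stand-ins, not items from the actual SJT:

```python
# Score one SJT scenario: +1 if the pick matches the keyed correct option,
# -1 if it matches the keyed wrong option, 0 for the two neutral options.
def score_scenario(pick: str, correct: str, wrong: str) -> int:
    if pick == correct:
        return 1
    if pick == wrong:
        return -1
    return 0

def scale_score(responses, key):
    """Sum scenario scores for one scale (e.g., Resilience)."""
    return sum(
        score_scenario(resp, key[sid]["correct"], key[sid]["wrong"])
        for sid, resp in responses.items()
    )

# Hypothetical 3-scenario scale and one test taker's picks
key = {
    "R1": {"correct": "a", "wrong": "c"},
    "R2": {"correct": "d", "wrong": "b"},
    "R3": {"correct": "b", "wrong": "a"},
}
responses = {"R1": "a", "R2": "b", "R3": "d"}  # correct, wrong, neutral
print(scale_score(responses, key))  # -> 0  (+1 - 1 + 0)
```

Repeating this for the four scales yields the four separate scores described in the text.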
In order to explore the construct validity of the SJT measure, assessing the four constructs, we used the following measures.

Resilience
We used the Resilience Scale of Wagnild and Young (1993), which contains 25 items, all of which are measured on a 7-point scale from 1 (strongly disagree) to 7 (strongly agree). An example item is: "When I make plans I follow through with them." The alpha reliability of the scale was 0.89.

Adaptability
We used the scale developed by Martin, Nejad, Colmar, and Liem (2012), consisting of nine items. Each item is measured on a 1 ("strongly disagree") to 7 ("strongly agree") scale. An example item is: "I am able to think through a number of possible options to assist me in a new situation." The alpha reliability of the scale was 0.89.

Flexibility
Flexibility was measured using the HEXACO Personality Inventory (Lee & Ashton, 2004), which contains 10 items measured on a 5-point scale from 1 ("strongly disagree") to 5 ("strongly agree"). An example item is: "I react strongly to criticism." The alpha reliability of the scale was 0.74.

Decision-making
For the assessment of decision-making skills, we adopted Mincemoyer and Perkins's (2003) measure, which assesses factors such as "define the problem; generate alternatives; check risks and consequences of choices; select an alternative; and evaluate the decision." The response category for each question was a 5-point Likert-type scale (1 = never to 5 = always) designed to determine frequency of use. An example item is: "I easily identify my problem." The alpha reliability of the scale was 0.77.
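Each of the scales above reports a Cronbach's alpha reliability. As a reminder of how that coefficient is computed, here is a minimal pure-Python sketch; the item scores are made up:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of item-score columns.

    items: list of k lists, each holding one item's scores across respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Two perfectly parallel (hypothetical) items -> alpha = 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # -> 1.0
```

In practice alpha is read off the analysis software, but the formula makes clear why adding internally consistent items raises the coefficient.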
4.3 | Procedure
We developed a SJT assessing the four competencies (resilience, adaptability, flexibility, and decision-making) following the guidelines suggested by Motowidlo et al. (1990). The content of the SJT's situations and response options was first developed, followed by an iterative procedure of face validation and construct validity assessment. At this stage, the SJT's scenarios, along with measures of Resilience, Adaptability, Flexibility, and Decision-Making, were administered to business school students and graduates. Then, the SJT's scenarios were converted into adventure scenarios around a common story by an English-speaking professional writer. The writer converted the four competencies into "islands of adventure," and the authors then thoroughly examined the content of the converted scenarios to ensure correspondence. A sample scenario of the gamified assessment is presented in the Appendix; the players wander around the islands indicating how they would most likely and least likely behave in particular instances. The mission of the gamified assessment is to respond to all situations/scenarios by indicating what one is most likely and least likely to do. Having established a robust SJT measurement and a gamified equivalent, we proceeded first to ensure the construct validity of the gamified SJT and second to verify the lack of systematic variance between the two different modes of testing (SJT and gamified SJT). Therefore, we administered the gamified SJT to employees and job seekers for validation purposes. As a result, a fully functional gamified selection and assessment approach was developed and transferred to an online platform.
5.1 | Item generation and content validation
Based on the critical incident methodology, experts' responses, and the writing of the scenarios, four response options were developed for each scenario, eliciting how the test taker would behave in each situation (in the form of "most likely would" and "least likely would"). We also employed the subject matter experts' (SMEs) scoring approach (Bergman, Drasgow, Donovan, & Henning, 2006), which asks for experts' opinions about the best and least likely response in each scenario. Following the formal procedure of Haynes, Richard, and Kubany (1995), to further validate the content of the SJT, an empirical methodology called hit ratio analysis, initiated by Moore and Benbasat (1991), was performed. Six scenarios in Resilience, seven in Adaptability, six in Flexibility, and six in Decision-Making survived after recurrent refinements in terms of clarity and grammar. At the final stage, experts and researchers reached acceptable levels of congruence after the content validation procedure (ICC = 0.72, p < 0.01) for the 25-item SJT format, following the guidelines of Cicchetti (1994).
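A hit ratio analysis of this kind boils down to the proportion of experts who sort each scenario into its intended construct, with low-ratio scenarios flagged for refinement. The retention threshold and expert assignments below are illustrative assumptions, not the study's actual figures:

```python
def hit_ratio(assignments, intended):
    """Proportion of expert assignments that match the intended construct."""
    hits = sum(1 for a in assignments if a == intended)
    return hits / len(assignments)

def retained(scenarios, threshold=0.7):
    """Scenario ids whose hit ratio meets an (illustrative) threshold."""
    return [
        sid for sid, (assigns, intended) in scenarios.items()
        if hit_ratio(assigns, intended) >= threshold
    ]

# Hypothetical sorting of two scenarios by five experts
scenarios = {
    "S1": (["Resilience"] * 4 + ["Flexibility"], "Resilience"),         # 0.8
    "S2": (["Adaptability"] * 2 + ["Flexibility"] * 3, "Adaptability"),  # 0.4
}
print(retained(scenarios))  # -> ['S1']
```

Scenarios falling below the threshold would go back into the refinement loop described in the text rather than being dropped outright.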
5.2 | Construct validity of the SJT
To ensure the results have not been influenced by Type II error, we performed a series of hierarchical linear regressions using the SJT facets as dependent variables, controlling for age and gender. The results presented in Table 1 provide evidence of convergent and discriminant validity of the SJT.
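The two-step logic of such an analysis (controls entered in Step 1, a scale added in Step 2, with ΔR² and an F-change test on the increment) can be sketched with a small ordinary least squares helper. The data here are simulated, and `ols_r2`/`f_change` are our own illustrative helpers, not the software used in the study:

```python
import random

def ols_r2(X, y):
    """R^2 of y regressed on the columns of X (intercept added)."""
    rows = [[1.0] + list(r) for r in X]
    p = len(rows[0])
    # Normal equations (X'X) b = X'y, solved by Gaussian elimination.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for i in range(p):
        for j in range(i + 1, p):
            f = A[j][i] / A[i][i]
            A[j] = [ajk - f * aik for ajk, aik in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * p
    for i in reversed(range(p)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, p))) / A[i][i]
    y_hat = [sum(c * x for c, x in zip(coef, r)) for r in rows]
    y_bar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def f_change(r2_full, r2_reduced, n, k_added, p_full):
    """F statistic for the R^2 increment when k_added predictors enter."""
    return ((r2_full - r2_reduced) / k_added) / ((1.0 - r2_full) / (n - p_full - 1))

# Simulated data: a facet score driven partly by its scale, plus controls.
random.seed(0)
n = 200
gender = [random.randint(0, 1) for _ in range(n)]
age = [random.uniform(20, 40) for _ in range(n)]
scale = [random.gauss(0, 1) for _ in range(n)]
facet = [0.5 * s + 0.1 * g + random.gauss(0, 1) for s, g in zip(scale, gender)]

r2_step1 = ols_r2(list(zip(gender, age)), facet)         # Step 1: controls only
r2_step2 = ols_r2(list(zip(gender, age, scale)), facet)  # Step 2: + scale
print(r2_step2 - r2_step1 > 0, f_change(r2_step2, r2_step1, n, 1, 3) > 0)  # -> True True
```

A significant F change for the Step 2 increment is what licenses reading the scale's beta as evidence of convergence over and above the demographic controls.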
More specifically, the resilience SJT facet is related to the resilience scale at a significant but moderate level (β = 0.350, p < 0.01), as well as to the decision-making scale (β = 0.104, p < 0.05) and the flexibility scale (β = −0.140, p < 0.05). The SJT flexibility measurement's regression coefficients are statistically significant for the HEXACO personality inventory measuring flexibility (β = 0.366, p < 0.01) and the adaptability scale (β = 0.166, p < 0.05). SJT Adaptability is related only to the adaptability scale (β = 0.166, p < 0.01), and the SJT Decision-Making facet to the decision-making (β = 0.389, p < 0.01), flexibility (β = −0.114, p < 0.05), and resilience (β = 0.202, p < 0.01) scales, respectively. Some of the facets of the SJT are cross-correlated with other measurements; however, the magnitude is low, providing sufficient evidence for discriminant validity. To further establish convergent validity on the same sample (N = 321), we conducted CFA (Bentler, 2004) with maximum likelihood estimation and robust statistics to address nonnormality of the data, using the fit indexes recommended by Hu and Bentler (1999). More specifically, a value of >0.90 for the comparative fit index (CFI) and normed fit index (NFI) and a value of <0.05 for the root mean square error of approximation (RMSEA) indicate a well-fitting model. With the exception of SJT Decision-Making and its respective scale, all other models present marginal though acceptable fit to the data (Table 2), thus satisfying to an extent the criteria for convergent validity.
The factor correlations in each specific model, ranging from 0.290 to 0.378 at a statistically significant level, provide evidence of convergence. Although fit is not strongly supported by these particular CFA models, the RMSEA 90% CI is in all cases within the acceptable limits (MacCallum, Browne, & Sugawara, 1996), thus providing evidence for acceptable measurement even though chi-square and CFI are only marginally acceptable (Chen, Curran, Bollen, Kirby, & Paxton, 2008; Kenny & McCoach, 2003). Furthermore, according to Bagozzi, Yi, and Phillips (1991), CFA models can be utilized to address convergent and discriminant validity more effectively than the Campbell and Fiske (1959) procedures and criteria, and although model fit is one of the criteria for establishing construct validity, it is not the most significant one. The reason is that some CFA criteria (i.e., χ2) may be distorted due to small sample size and falsely neglect the actual correlation and covariance between the traits under investigation.
The platform hosting the gamified assessment was released, and 410 participants (mainly employees or job seekers) voluntarily completed the online version of it. To further establish the construct validity of the new, gamified version of the test, we picked a subsample of test-takers (mean age: 27.6, SD: 4.6) who had completed both the SJT and the gamified version of it
TABLE 1 Hierarchical linear regressions (N = 321) with SJT facets as dependent variables (scales as independent variables)

Dependent variable: Resilience (SJT)
                          R²      ΔR²     F change   B        Sig
Step 1
  Gender                  0.215   0.215   3.498*     0.092    0.335
  Age                                                −0.120   0.117
Step 2
  Age                     0.311   0.096   5.393*     −0.094   0.021*
  Resilience Scale                                   0.350    0.000**
  Flexibility Scale                                  −0.140   0.041*
  Decision-Making Scale                              0.104    0.045*

Dependent variable: Adaptability (SJT)
                          R²      ΔR²     F change   B        Sig
Step 1
  Gender                  0.069   0.069   4.710      0.172    0.035*
  Age                                                0.034    0.543
Step 2
  Adaptability Scale      0.134   0.106   7.579      0.166    0.000**

Dependent variable: Flexibility (SJT)
                          R²      ΔR²     F change   B        Sig
Step 1
  Gender                  0.087   0.087   1.144      0.069    0.227
  Age                                                0.062    0.272
Step 2
  Flexibility Scale       0.281   0.106   7.157      0.366    0.000**
  Adaptability Scale                                 0.166    0.051

Dependent variable: Decision-Making (SJT)
                          R²      ΔR²     F change   B        Sig
Step 1
  Gender                  0.067   0.067   1.154      0.032    0.458
  Age                                                0.016    0.670
Step 2
  Decision-Making Scale   0.275   0.208   7.793      0.389    0.000**
  Resilience Scale                                   0.202    0.000**
  Flexibility Scale                                  −0.114   0.047*

Note. Table reports standardized beta coefficients; Resilience, Flexibility, Adaptability, and Decision-Making scales as independent variables.
*Significant at the 0.05 level (two-tailed). **Significant at the 0.01 level (two-tailed).
(N = 97). Over this small sample, we performed linear regressions
using as independent variables the set of well‐established measures
and self‐reports provided by the common subsample of test‐takers.
The results, after controlling for age and gender, revealed
significant associations with the corresponding scales. Indicatively,
the resilience facet in the gamified test is related to the measure of
resilience (β = 0.565, p < 0.01), the adaptability facet to the adaptability
scale (β = 0.528, p < 0.01) and the HEXACO scale (β = 0.187, p < 0.05),
the flexibility facet to the HEXACO flexibility and flexibility scales (β =
0.552, p < 0.01 and β = 0.211, p < 0.05, respectively), and the deci‐
sion‐making facet to the decision‐making scale (β = 0.450, p < 0.01).
Even though there are cross‐loadings in some cases (i.e., flexibility
and adaptability), their magnitude is small. It should be noted that the
sample size is small and neither CFA nor path analysis techniques are
applicable, due to potential identification errors (Kline, 2005).
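The two‐step procedure described above (demographic controls entered in Step 1, a scale added in Step 2, with ΔR2 and an F‐change statistic testing the increment) can be sketched as follows. This is an illustrative reimplementation on synthetic data, not the authors' analysis code; all variable names and sample values are hypothetical.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit, with an intercept column added to X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def hierarchical_step(controls, predictors, y):
    """Two-step hierarchical regression: return R^2 of each step,
    delta-R^2, and the F-change statistic for the added predictors."""
    n = len(y)
    r2_step1 = r_squared(controls, y)
    X_full = np.column_stack([controls, predictors])
    r2_step2 = r_squared(X_full, y)
    k_added = predictors.shape[1]   # predictors added in step 2
    p_full = X_full.shape[1]        # total predictors in the full model
    f_change = ((r2_step2 - r2_step1) / k_added) / \
               ((1.0 - r2_step2) / (n - p_full - 1))
    return r2_step1, r2_step2, r2_step2 - r2_step1, f_change

# Synthetic stand-in data: gender and age as step-1 controls,
# one self-report scale score as the step-2 predictor.
rng = np.random.default_rng(0)
n = 97
controls = np.column_stack([rng.integers(0, 2, n), rng.uniform(20, 40, n)])
scale = rng.normal(size=(n, 1))
y = 0.5 * scale[:, 0] + rng.normal(scale=0.8, size=n)

r2_1, r2_2, delta_r2, f_change = hierarchical_step(controls, scale, y)
```

Because the scale genuinely drives the synthetic outcome, Step 2 adds explanatory variance over the controls, which is exactly the pattern the significant F‐change statistics in the table reflect.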
To remedy this, a subsequent confirmatory factor analysis
was performed to both confirm the appropriateness of the
test structure and gain further insight into discriminant validation
(N = 410). The results showed good fit to the data (Satorra–Bentler
scaled χ2 [269, N = 410] = 306.94, p = 0.05; CFI = 0.91; NNFI = 0.89;
IFI = 0.91; RMSEA = 0.019; RMSEA 90% interval [0.000, 0.027]),
with statistically significant coefficient estimates ranging from
0.141 to 0.607 and zero covariances between dependent variables
(i.e., constructs). This is an indication, according to Bagozzi et al.
(1991), of discrimination between the facets: the resilience
gamified facet does not covary with the SJT flexibility, adaptability, and
decision‐making facets; adaptability is not significantly related to the re‐
silience and decision‐making SJT dimensions; flexibility presents no
covariance with the resilience and decision‐making SJT facets; and
the decision‐making gamified dimension is not related to any of the
other SJT facets. Additionally, residual analysis showed that the residu‐
als are symmetrically distributed around zero, i.e., 95% of them
are close to zero (Joreskog & Sorbom, 1988); the average off‐diagonal
absolute standardized residual is low, i.e., 0.04 (Bentler, 2004);
and the standardized root mean square residual (SRMR) of 0.48 is
lower than the cut‐off rate of 0.50 (Hu & Bentler, 1999). Moreover,
careful inspection of the residual correlation matrix revealed no
residual correlations of large magnitude (range −0.151 to 0.140),
indicating only minimal discrepancy in fit between
the hypothesized model and the sample data (Hu & Bentler, 1999).
However, this marginal fit (p = 0.05) prompted us to consider
further model modifications to achieve a better fit and to reassess
unexplained model variance that may be attributable to other
elements of the equations (Bentler, 2004).
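The fit indices just reported can be spot‐checked from the published statistics. The sketch below applies the standard RMSEA formula, sqrt(max(χ2 − df, 0) / (df · (N − 1))), to the reported values, and shows how SRMR is computed from a matrix of standardized residuals; it is a generic illustration of the formulas, not the authors' EQS output.

```python
import numpy as np

def rmsea(chi2_stat, df, n):
    """Steiger-Lind RMSEA: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return float(np.sqrt(max(chi2_stat - df, 0.0) / (df * (n - 1))))

def srmr(std_residuals):
    """Root mean square of the standardized residuals over the lower
    triangle (including the diagonal) of the residual matrix."""
    r = np.asarray(std_residuals, dtype=float)
    tri = r[np.tril_indices_from(r)]
    return float(np.sqrt(np.mean(tri ** 2)))

# Reported gamified-SJT CFA values: chi2 = 306.94, df = 269, N = 410.
print(round(rmsea(306.94, 269, 410), 3))  # → 0.019, matching the reported RMSEA
```

Plugging in the reported χ2, df, and N reproduces the published RMSEA of 0.019, which is a useful sanity check on extracted statistics.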
To ensure that the transition from a paper‐and‐pencil SJT to a
gamified environment proceeded smoothly, avoiding potential variance
due to the use of different samples during the validation pro‐
cedure, we employed cross‐validation analysis over a joint sample of
321 university students, who took the SJT version, and 410 employ‐
ees and job seekers, who took the gamified SJT version. Accordingly,
multiple‐group measurement invariance tests were performed on
the SJT and gamified SJT scales to assess cross‐validation among
samples. Previous research has shown that when parallel data exist
TABLE 2 Confirmatory factor analysis results of the SJT (N = 321)

Model                 S–B scaled χ2           CFI     NFI     RMSEA   RMSEA 90% CI   Correlation with corresponding scale   Model fit
SJT resilience        817.7025 (p > 0.001)    0.890   0.870   0.050   0.042–0.058    Resilience Scale: 0.352**              Accepted
SJT adaptability      205.7359 (p < 0.05)     0.854   0.843   0.058   0.046–0.070    Adaptability Scale: 0.378**            Marginally accepted
SJT flexibility       142.7897 (p > 0.001)    0.910   0.891   0.035   0.019–0.048    Flexibility Scale: 0.290**             Accepted
SJT decision‐making   508.3967 (p < 0.05)     0.792   0.749   0.070   0.060–0.078    Decision‐Making Scale: 0.364**         Rejected

Note. **Correlation is significant at the 0.01 level (two‐tailed).
   GEORGIOU Et al.
across groups, multiple‐group analysis offers a powerful test of the
equivalence of factor solutions across samples because it rigorously
assesses measurement properties (Bagozzi & Yi, 1988; Bollen, 1989;
Marsh, 1995; Marsh & Hocevar, 1985). Table 3 presents the fit esti‐
mates for the models in the invariance hierarchy.
The baseline model in all cases shows adequate fit to the data,
as all fit indices are within the predefined cut‐off values. With the ex‐
ception of the fourth model, compared with the baseline in all scale
cases, all χ2 differences along with the other fit indices indicate good
fit to the data. Also, CAIC decreases after comparison with the baseline
model, serving as an additional sign of a nonchance lack of invariance.
When the baseline model is constrained in factor loadings, factor
correlations, and factor variances, all construct cases present a large
chi‐square difference from the baseline models and adequate criteria
values. More specifically, the chi‐square difference from the base‐
line model is Δχ2 (13, N = 731) = 127.797, p < 0.001, in resilience; Δχ2
(15, N = 731) = 168.888, p < 0.001, in adaptability; Δχ2 (13, N = 731)
= 230.504, p < 0.001, in flexibility; and Δχ2 (13, N = 731) = 195.547,
p < 0.001, in decision‐making, indicating large‐magnitude differences
and hence rejection of the final model in the hierarchy. Accordingly,
the fit indices are lower than the cut‐off values in all cases. These
chi‐square differences were relatively large; however, invariant fac‐
tor variances are considered the least important property in testing
measurement invariance across groups (Bollen, 1989; Marsh, 1995).
Therefore, some evidence of partial measurement invariance is ap‐
parent across the samples (Vandenberg & Lance, 2000).
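The Δχ2 tests in the invariance hierarchy are likelihood‐ratio comparisons: under the null of invariance, the difference in χ2 between a constrained model and the baseline is itself χ2‐distributed with the difference in degrees of freedom. A minimal sketch using values from Table 3 (scipy assumed available):

```python
from scipy.stats import chi2  # scipy assumed available

def chi2_difference(chi2_constrained, df_constrained,
                    chi2_baseline, df_baseline):
    """Likelihood-ratio (chi-square difference) test of a constrained
    invariance model against the unconstrained baseline model."""
    d_chi2 = chi2_constrained - chi2_baseline
    d_df = df_constrained - df_baseline
    p_value = chi2.sf(d_chi2, d_df)
    return d_chi2, d_df, p_value

# Resilience scale (Table 3): factor loadings invariant vs. baseline.
d1, ddf1, p1 = chi2_difference(89.170, 63, 62.104, 53)  # d1 = 27.066, ddf1 = 10

# Fully constrained model (loadings, correlations, variances) vs. baseline:
# a much larger difference, clearly significant, so the model is rejected.
d2, ddf2, p2 = chi2_difference(190.054, 66, 62.104, 53)
```

The first comparison yields a p‐value below 0.01, consistent with the *** marking in Table 3, while the fully constrained model produces a far larger Δχ2 and a p‐value well below 0.001, matching the rejection reported in the text.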
7 | DISCUSSION
The present study introduced a new gamified instrument to meas‐
ure some of the skills and competencies that employers often look
for when hiring young graduates. We gamified an SJT assessment
measuring four constructs: resilience, adaptability, flexibility, and
decision‐making. These dimensions have been shown to be reliable
and factorially distinct, whereas the convergent and discriminant
validity of the gamified measure was established by showing its as‐
sociations with well‐established self‐report measures. To be more
specific, having first developed and tested an SJT, we found prelimi‐
nary support that the addition of game elements (e.g., avatars, feed‐
back, narrative, and visual/voice‐overs) to the SJT and its conversion
into an online adventure story supports the construct validity of
the measure. Admittedly, the strength of the convergence described
above is not well founded, given that the majority of fit indexes are
only marginally acceptable by the standards of the existing literature
(Hu & Bentler, 1999). However, as many scholars have argued in recent
research, goodness‐of‐fit indexes are highly dependent on sample sizes,
estimators, and distributions and should be treated not as golden rules
but as supplementary to human judgment (Barrett, 2007; Bentler, 2007;
Marsh, Hau, & Wen, 2004). For this reason, we allowed ourselves some
degrees of freedom in evaluating the CFA models and based our deci‐
sions on more standardized evidence, as provided by the RMSEA index
coupled with the accuracy of its estimation given by the RMSEA 90%
confidence intervals. Indeed, a more thorough inspection of poten‐
tial modifications in the item structure is needed, which should be
part of future research.
We believe that this novel instrument contributes to research
and practice in two main ways. First, the current gamified assess‐
ment method is among the first validated instruments using game
elements to assess candidates' soft skills. To the best of our
knowledge, this is probably the first published study exploring the
psychometric properties of a gamified selection method. Our study
contributes to research on gamification and selection methods by ex‐
ploring the construct validity of a new gamified selection method,
emphasizing a use of gamification that focuses on behavior rather than
traits. Contrary to personality inventories, which rely on self‐report
data and are prone to social desirability bias (e.g., Mayer & Salovey,
1997; Roberts, Matthews, & Zeidner, 2010), a gamified assessment
extracts information about candidates' intentions and as a result might
be less prone to faking and distortion. Accordingly, the scenarios in‐
corporated in the gamified assessment are the SJT scenarios, which
assess work‐related behaviors and are thus more likely to predict
future work behaviors than survey‐based inventories (Armstrong,
Ferrell et al., 2016a). The use of game elements might enable the
test to assess skills indirectly, making it difficult for candidates
to distort their answers, since the desirable behaviors are not
obvious to them. Also, a gamified assessment might enhance fun,
motivation, and engagement, as well as improve predictive validity
(Collmus, Armstrong, & Landers, 2016; Yan, Conrad, Tourangeau, &
Couper, 2010). Future research is needed to determine whether the gamified
assessment method we presented is more valid, fair, fun, or engaging
over and above traditional selection methods, such as personal‐
ity tests. We have taken the first steps to explore both applicant
reactions, such as organizational attractiveness and recommenda‐
tion intentions (Gkorezis, Georgiou, Nikolaou, & Perperidou, 2019;
Nikolaou & Georgiou, 2017), and participants' performance (e.g., self‐
reported job and academic performance in Nikolaou, Georgiou, &
Kotsasarlidou, 2019), providing preliminary support that the current
gamified assessment has the potential to be an attractive and valid
alternative to traditional selection methods.
Second, this study contributes to the literature on SJTs. Our
findings provide support that game elements, such as storylines,
feedback, avatars, visuals, and voice‐overs, can be successfully ap‐
plied to SJTs to effectively assess candidates' soft skills. Our
study extends previous studies that demonstrated the applicability of
SJTs in high‐fidelity modes, such as video, multimedia, and inter‐
active formats (Lievens & Sackett, 2006a), and gamified contexts
(Armstrong, Landers et al., 2016b). Webcams, static video re‐
corders, and multimedia have been successfully applied to SJTs,
increasing the fidelity of presenting the scenarios and leading to in‐
creased realism and more positive applicant reactions, as well as
higher predictive validity (Lievens & Sackett, 2006b; Oostrom et
al., 2010; Rockstuhl et al., 2015). The incorporation of game el‐
ements into an SJT is likely to increase fidelity, fun, fairness, and
favorable applicant reactions, while eliciting behaviors and pre‐
dicting job performance more successfully. Along these lines, our
TABLE 3 Cross‐validation analysis results (N = 731)

Resilience
Model                                                                  χ2 (N = 731)   df   χ2 diff.*    df diff.**   RMSEA (90% CI)        GFI     CFI     CAIC       Model fit
No constraints (baseline)                                              62.104***      53   –            –            0.023 (0.000–0.044)   0.965   0.927   −295.61    ACCEPT
Factor loadings invariant                                              89.170***      63   27.066***    10           0.036 (0.016–0.053)   0.956   0.867   −336.441   ACCEPT
Factor loadings and factor correlations invariant                      89.257***      64   27.153***    11           0.036 (0.015–0.052)   0.956   0.867   −342.704   ACCEPT
Factor loadings, factor correlations, and factor variances invariant   190.054***     66   127.797***   13           0.077 (0.065–0.090)   0.900   0.729   −255.405   REJECT

Adaptability
No constraints (baseline)                                              106.830***     76   –            –            0.035 (0.017–0.050)   0.956   0.895   407.77     ACCEPT
Factor loadings invariant                                              146.916***     88   40.086***    12           0.046 (0.032–0.058)   0.948   0.832   −448.42    ACCEPT
Factor loadings and factor correlations invariant                      147.123***     89   40.207***    13           0.045 (0.032–0.058)   0.938   0.795   −454.97    ACCEPT
Factor loadings, factor correlations, and factor variances invariant   275.718***     91   168.888***   15           0.080 (0.069–0.091)   0.876   0.749   339.91     REJECT

Flexibility
No constraints (baseline)                                              61.707***      53   –            –            0.023 (0.000–0.044)   0.968   0.926   −296.34    ACCEPT
Factor loadings invariant                                              90.465***      63   28.758***    10           0.037 (0.018–0.053)   0.955   0.894   335.14     ACCEPT
Factor loadings and factor correlations invariant                      91.779***      64   30.072***    11           0.037 (0.018–0.053)   0.955   0.894   −340.58    ACCEPT
Factor loadings, factor correlations, and factor variances invariant   292.211***     66   230.504***   13           0.104 (0.092–0.116)   0.850   0.721   −153.66    REJECT

Decision‐Making
No constraints (baseline)                                              88.618***      53   –            –            0.046 (0.028–0.062)   0.946   0.897   −269.77    ACCEPT
Factor loadings invariant                                              120.854***     63   32.236***    10           0.054 (0.039–0.068)   0.942   0.827   −305.15    MARGINALLY
Factor loadings and factor correlations invariant                      120.874***     64   32.256***    11           0.053 (0.038–0.067)   0.942   0.827   −311.89    MARGINALLY
Factor loadings, factor correlations, and factor variances invariant   284.165***     66   195.547***   13           0.102 (0.090–0.114)   0.850   0.716   −162.13    REJECT

Note. Empty cells indicate no calculation. The final sample of 731 participants derived from the summation of 410 gamified SJT test takers and 321 SJT test takers. GFI, goodness‐of‐fit index; CFI, comparative fit index; CAIC, Conditional Akaike Information Criterion; diff., difference.
*Difference in the chi‐square statistic between a given model and the baseline model. **Difference in degrees of freedom between a given model and the baseline model. ***p < 0.01.
study responds to the widespread use of the internet and game
playing (Campbell, 2015), addressing calls for the exploration of
the efficiency of gamification in employee selection.
7.1 | Practical implications
The current research has important practical implications for or‐
ganizations. By establishing the construct validity of the gamified
selection method, recruiters might use a new selection tool that ef‐
fectively assesses soft skills, with the potential to reduce the risk
and the “cost” of bad hires. Organizations might also improve their
selection processes by replacing or supplementing traditional selection
methods with gamified selection methods.
Gamified selection methods share several benefits with other
multimedia tests. A gamified selection method can be admin‐
istered over the Internet to a large group of applicants in var‐
ious locations while automatically recording candidates' responses
(Oostrom, Born, & Van Der Molen, 2013). Also, it focuses on be‐
havior rather than on personality traits, which appear to be a less import‐
ant criterion in employee selection (Viswesvaran & Ones, 2000).
Gamified selection methods might be used to obtain higher quality
information from candidates, since they are more difficult for test‐
takers to fake and better able to elicit behaviors than traditional se‐
lection methods (Armstrong, Landers et al., 2016b). Moreover, an
interpersonally oriented multimedia SJT is expected to demonstrate
higher criterion‐related validity than a paper–pencil test (Lievens &
Sackett, 2006a). The gamified SJT uses verbal and visual cues that
enhance realism and, as a result, might provide future employers
with a superior assessment of candidates' skills compared to traditional
selection tests. On top of that, these properties might positively af‐
fect applicants' reactions. Applicants perceive multimedia tests
as more valid and enjoyable and, as a result, are more satisfied
with the selection process (Richman‐Hirsch, Olson‐Buchanan, &
Drasgow, 2000).
Employers might also benefit from the use of a gamified selection
method by increasing their organizational attractiveness and positive
behavioral intentions, such as applicants' job offer acceptance ratio.
Since organizations nowadays employ a diverse workforce, they
should use a selection method that reduces adverse impact and re‐
spects ethnic minorities (Oostrom et al., 2013). Hence, a multimedia‐
based assessment method is expected to result in reduced adverse
impact compared to a paper‐and‐pencil method.
7.2 | Limitations and future research
The present study is not without limitations. From a methodologi‐
cal point of view, the sample size is a main concern for the current
study: samples in these kinds of studies should be larger. For
example, the small sample of 97 common test‐takers of the
paper‐and‐pencil and gamified SJT versions is barely
adequate (Hoyle, 1995; Kline, 2005) for performing robust statistical
analyses and establishing the construct validity of the gamified SJT. To
this end, the results are not clear‐cut and the conclusions are not
easily interpretable. To remedy this, we performed a full confirma‐
tory factor analysis with an independent larger sample (N = 410).
The results of the CFA demonstrated a marginally acceptable fit for
three of the four scales of the game, with the exception of the
decision‐making scale, thus questioning the internal validity of the
four‐factor model. As a future endeavor, post hoc modifications are
required, which will lead to a potentially shorter version of our re‐
search “product,” i.e., the gamified SJT, and will probably assist in
achieving more robust results in forthcoming validation steps (crite‐
rion‐related and incremental validation procedures). The size of the
sample at the item generation stage is also a limiting factor in ex‐
plaining our results. It would have been better if we had employed a higher
number of SMEs, in line with common practice (Bledow & Frese,
2009; Motowidlo et al., 2006).
In order to explore the construct validity of the gamified SJT,
we reached a model with a marginal fit to the data that may need further
examination in the future, especially in criterion‐related studies.
The residual analyses showed that residuals are not an issue at this time,
and improving the marginal fit is beyond the scope of the current paper.
However, further modification of the tested model should be per‐
formed following the guidelines described in the literature (e.g.,
Bentler, 2004), leading to a higher variance reallocation and there‐
fore a different and probably lighter version of the gamified SJT.
From a practical viewpoint, this study is limited by the fact that
the criterion‐related or incremental validity of the assessment over
traditional selection methods, such as personality or ability tests,
has not been established yet (Nunnally & Bernstein, 1994), with the
exception of self‐reported job and academic performance (Nikolaou
et al., 2019). Also, although we employed fantasy and adventure
stories that are likely to keep people engaged, there might be appli‐
cants who favor games that are obviously related to the job.
Future research could explore applicant reactions, such as perceived
test fairness and appropriateness of the selection instrument, among
candidates who complete either or both of the SJT and its gamified
version, to further support the effectiveness of using game elements
in selection methods. Another limitation of the gamified SJT
might be accessibility issues for candidates who may not have
the hardware or internet connection required to take the assessment.
Finally, the current version of the gamified assessment should also
be enriched with the assessment of additional skills, since it cur‐
rently measures only four skills.
7.3 | Conclusions
Recently, a number of organizations have employed gami‐
fication and game‐based assessments in employee recruitment and
selection. However, no published empirical studies have explored
the validity of gamification in assessing candidates' skills. Our study
supports that converting a traditional SJT into a gamified assessment,
in order to effectively assess candidates' soft skills, such as resil‐
ience, adaptability, and decision‐making, can be of value. We first
presented the development of an SJT to form the basis of the gami‐
fied assessment method. Second, the SJT's construct validity was
explored in order to transform it into a gamified assessment method,
and third, the construct validity of the gamified assessment was es‐
tablished. As a result, the current study contributes to research on
the use of game elements in employee selection as well as to SJT
research and practice. By eliciting job‐relevant behaviors within the
context of a gamified assessment, increased prediction of future
work behaviors may be possible compared to traditional psycho‐
metric tests. Future research is needed in order to provide further
support that gamified assessment methods can be an accurate and
attractive selection method.
ORCID
Konstantina Georgiou‐0001‐8272‐9089
Ioannis Nikolaou‐0003‐3967‐5040
REFERENCES
Andrews, J., & Higson, H. (2008). Graduate employability, ‘soft skills’ versus ‘hard’ business knowledge: A European study. Higher Education in Europe, 33(4), 411–422.
Armstrong, M. B., Ferrell, J., Collmus, A., & Landers, R. (2016a). Correcting misconceptions about gamification of assessment: More than SJTs and badges. Industrial and Organizational Psychology, 9(3), 671–677.
Armstrong, M. B., Landers, R. N., & Collmus, A. B. (2016b). Gamifying recruitment, selection, training, and performance management: Game‐thinking in human resource management. In D. Davis & H. Gangadharbatla (Eds.), Emerging research and trends in gamification (pp. 140–165). Hershey, PA: IGI Global.
Bagozzi, R. P., & Yi, Y. (1988). On the evaluation of structural equation models. Journal of the Academy of Marketing Science, 16, 74–94.
Bagozzi, R. P., Yi, Y., & Phillips, L. W. (1991). Assessing construct validity in organizational research. Administrative Science Quarterly, 36, 421–458.
Barrett, P. (2007). Structural equation modeling: Adjusting model fit. Personality and Individual Differences, 42, 815–824.
Bentler, P. (2004). EQS 6 structural equations program manual. Encino, CA: Multivariate Software.
Bentler, P. (2007). On tests and indices for evaluating structural models. Personality and Individual Differences, 42, 825–829.
Bergman, M. E., Drasgow, F., Donovan, M. A., Henning, J. B., & Juraska, S. E. (2006). Scoring situational judgment tests: Once you get the data, your troubles begin. International Journal of Selection and Assessment, 14(3), 223–235.
Bledow, R., & Frese, M. (2009). A situational judgment test of personal initiative and its relationship to performance. Personnel Psychology, 62, 229–258.
Bollen, K. A. (1989). Structural equations with latent variables. New York, NY: Wiley.
Campbell, C. (2015). Here's how many people are playing games in America. Retrieved from
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait‐multimethod matrix. Psychological Bulletin, 56(2), 81–105.
Chamorro‐Premuzic, T., Akhtar, R., Winsborough, D., & Sherman, R. A. (2017). The datafication of talent: How technology is advancing the science of human potential at work. Current Opinion in Behavioral Sciences, 18, 13–16.
Chen, F., Curran, P. J., Bollen, K. A., Kirby, J., & Paxton, P. (2008). Evaluation of the use of fixed cutoff points in RMSEA test statistic in structural equation models. Sociological Methods & Research, 36(4).
Chow, S., & Chapman, D. (2013). Gamifying the employee recruitment process. Paper presented at the First International Conference on Gameful Design, Research, and Applications.
Christian, M. S., Edwards, B. D., & Bradley, J. C. (2010). Situational judgment tests: Constructs assessed and a meta‐analysis of their criterion‐related validities. Personnel Psychology, 63(1), 83–117.
Cicchetti, D. V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment, 6(4), 284–290.
Clarke, M. (2016). Addressing the soft skills crisis. Strategic HR Review, 15(3), 137–139.
Collmus, A. B., Armstrong, M. B., & Landers, R. N. (2016). Game‐thinking within social media to recruit and select job candidates. In R. N. Landers & G. B. Schmidt (Eds.), Social media in employee selection and recruitment (pp. 103–124). Cham, Switzerland: Springer.
Deterding, S., Björk, S. L., Nacke, L. E., Dixon, D., & Lawley, E. (2013). Designing gamification: Creating gameful and playful experiences. Paper presented at CHI '13 Extended Abstracts on Human Factors in Computing Systems.
Dicheva, D., Dichev, C., Agre, G., & Angelova, G. (2015). Gamification in education: A systematic mapping study. Journal of Educational Technology & Society, 18(3), 75.
Fetzer, M., McNamara, J., & Geimer, J. L. (2017). Gamification, serious games and personnel selection. In H. W. Goldstein, E. D. Pulakos, J. Passmore, & C. Semedo (Eds.), The Wiley Blackwell handbook of the psychology of recruitment, selection and employee retention (pp. 293–309). Chichester, UK: John Wiley & Sons.
Gkorezis, P., Georgiou, K., Nikolaou, I., & Perperidou, S. (2019). Game‐based assessment vs situational judgment test: Applicant reactions and recruitment outcomes through a moderated mediation model. Paper accepted for presentation at the 19th EAWOP Congress, Turin, Italy.
Gray, A. (2016, January 19). The 10 skills you need to thrive in the fourth industrial revolution. Retrieved from
Hawkes, B., Cek, I., & Handler, C. (2017). The gamification of employee selection tools: An exploration of viability, utility, and future directions. In J. C. Scott, D. Bartram, & D. H. Reynolds (Eds.), Next generation technology‐enhanced assessment: Global perspectives on occupational and workplace testing (pp. 288–313). New York, NY: Cambridge University Press.
Haynes, S. N., Richard, D. C. S., & Kubany, E. S. (1995). Content validity in psychological assessment: A functional approach to concepts and methods. Psychological Assessment, 7, 238–247.
Hoyle, R. (1995). Structural equation modeling: Concepts, issues, and applications. Thousand Oaks, CA: Sage.
Hu, L., & Bentler, P. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55.
Joreskog, K. G., & Sorbom, D. (1988). PRELIS: A program for multivariate data screening and data summarization. A preprocessor for LISREL. Chicago, IL: Scientific Software.
Kenny, D. A., & McCoach, D. B. (2003). Effect of the number of vari‐
ables on measures of fit in str uctural equation modeling. Structural
Equation Modeling: A Multidisciplinary Journal, 10(3), 333–351.
Kline, R . B. (20 05). Principles and practice of s tructural equatio n modeling
(2nd ed.). New York, NY: The Guilford Press.
Krumm, S., Lievens, F., Hüffmeier, J., Lipnevich, A. A., Bendels, H ., &
Hertel, G. (2015). How “situational” is judgment in situational judg‐
ment tests? Journal of Applied Psychology, 100(2), 3 99– 416.
Kubisiak, C., Stewart, R ., Thornbury, E., & Moye, N. (2014, May).
Development of PDRI 's learning agility simulation. In E. C. Popp
(Chair), Challenges and innovations of using game‐like assessments
in selection. Symposium presented at the 29th Annual Conference
of the Society for Industrial and Organizational Psycholog y,
Honolulu, HI.
Laker, D. R., & Powell, J. L. (2011). The differences between hard and soft
skills and their relative impact on tr aining transfer. Human Resource
Development Quarterly, 22(1), 111122.
Landers, R. N . (2014). Developing a theory of gamified learning: Linking
serious games and gamification of learning. Simulation & Gaming,
45(6), 752–768. ht tps://doi.o rg/10.1177/1046 878114563660
Laume r, S., Eckhard t, A., & Weitzel , T. (2012). Online g aming to find a new
job‐examining job seekers' intention to use serious games as a self‐
assessment tool. German Journal of Human Resource Management,
26(3), 218–240. /10.1177/239700221202600302
Lee, K., & Ashton , M. C. (2004). Psychometric pro perties of the HEXACO
personality inventory. Multivariate Behavioral Research, 39(2), 329–
Lievens, F. (2017). Integrating situational judgment tests and assessment
centre exercises into personality research: Challenges and further
opportunities. European Journal of Personality, 31(5 ), 487–5 02.
Lievens, F., & De Soete, B. (2012). Simulations. In S. Schmit t (Ed.), The
Oxford handbook of p ersonnel assessm ent and selection (pp. 383–410).
New York, NY: Oxford University Press.
Lievens, F., & Motowidlo, S. J. (2016). Situational judgment test s: From
measures of situational judgment to measures of general domain
knowledge. Industrial and Organizational Psychology, 9(1), 3–22.
Lievens, F., Peeters, H., & Schollaert, E. (200 8). Situational judgment
tests: A review of recent research. Personnel Review, 37(4), 426–441.
htt ps:// /10.1108/00 483480810877598
Lievens, F., & Sackett, P. R. (2006a). Situational judgment tests in high
stakes settings: Issues and strategies with generating alternate
forms. Journal of Applied Psychology, 92(4), 10 43–1055.
Lievens, F., & Sackett, P. R. (2006b). Video‐based versus written situa‐
tional judgment tests: A comparison in terms of predictive validity.
Journal of Applied Psychology, 91(5), 1181.
MacCallum, R. C., Browne, M. W., & Sugawara, H . M. (1996). Power
analysis and determination of sample size for covariance str uc‐
ture modeling. Psychological Methods, 1, 130–149. https://doi.
org /10.1037/1082‐989X .1.2 .130
Malone, T. W., & Lepper, M. (1987). Making learning fu n: A taxonomy of
intrinsic motivations for learning. Hills‐Dale, NJ: Erlbaum.
Marsh, H. W. (1995). Confirmatory fac tor analysis models of factorial in‐
variance: A multifaceted approach. Structural Equation Modeling, 1,
5–34. htt ps:// 80/1070 5519409539960
Marsh, H. W., Hau, K., & Wen, Z. (20 04). In search of golden rules:
Comment on hypothesis‐testing approaches to setting cutoff val‐
ues for fit indexes and danger s in overgeneralizing Hu and Bentler's
1999 findings. Structural Equation Modeling, 11, 320–341. https://doi.
org /10.1207/s153280 07sem1103_2
Marsh, H. W., & Hocevar, D. P. (1985). The application of confirmator y
factor analysis to the study of self‐concept: First and higher order
facto r structur es and their inva riance across ag e groups. Psychological
Bulletin, 97, 562–5 82 .
Martin, A. J., Nejad, H., Colmar, S., & Liem, G. A . (2012). Adaptability:
Conceptual and empirical perspectives on responses to change, nov‐
elty and uncertainty. Australian Journal of Guidance and Counselling,
22, 58–81.
Mayer, J.‐S., & Salovey, A. (1997). What is emotional intelligence. In P. S.
D. J. Salovey (Ed.), Emotional development and emotional intelligence:
Implications for educators (pp. 3–31). New York, NY: Harper Collins.
McDaniel, M. A., Hartman, N. S., Whetzel, D. L., & Grubb, W. L. (2007).
Situational judgment tests, response instructions, and validit y:
A meta‐analysis. Personnel Psychology, 60(1), 63–91. https://doi.
org /10.1111/j.1744‐6570. 20 07.000 65. x
Michael, D. R., & Chen, S. L. (2005). Serious games: Game s that educ ate,
train, and inform. Boston, MA: Muska & Lipman/Premier‐Trade.
Mincemoyer, C. C., & Perkins, D. F. (2003). Assessing decision‐making skills
of youth. Paper presented at the Forum for Family and Consumer
Moore, G . C., & Benbasat, I. (1991). Development of an instrument to
measure the perceptions of adopting an d information technology in‐
novation. Information Systems Research, 2(3), 192–222.
Motowidlo, S. J., Dunnette, M . D., & Carter, G. W. (1990). An al‐
ternative selection procedure: The low‐fidelity simulation.
Journal of Applied Psychology, 75(6), 640–647. https://doi.
org /10.1037/0021‐9010.75.6 .640
Motowidlo, S. J., Hooper, A. C., & Jackson, H. L. (2006). Implicit policies
about relations between personality traits and behavioural ef fec‐
tiveness in situational judgement items. Journal of Applied Psychology,
91, 749–761 .
Murphy, K. R ., & Dzieweczynski, J. L. (2005). Why don't measures of
broad dimensions of p ersonality perform better as predic tors of job
performance? Human Performance, 18(4), 343–357.
Nah, F.‐F.‐H., Zeng, Q., Telaprolu, V. R., Ayyappa, A. P., & Eschenbrenner,
B. (2014). Gamification of education: A review of literature. Paper pre‐
sented at the International conference on hci in business.
Nikolaou, I., & Georgiou, K . (2017). Serious gaming and applicants' re
actions; T he role of openness to experience. In M. Armstrong, D. R.
Sanchez & K. N. Bauer (2017): Gaming and gamification IG NITE:
Current t rends in research and application. 32nd Annual Conference
of the Society for Industrial and Organizational Psycholog y, Orlando,
Nikolaou, I., Georgiou, K., Bauer, T. N., & Truxillo, D. M. (2019). Technology and applicant reactions. In R. N. Landers (Ed.), Cambridge handbook of technology and employee behavior (pp. 100-130). Cambridge, UK: Cambridge University Press.
Nikolaou, I., Georgiou, K., & Kotsasarlidou, V. (2019). Exploring the relationship of a game-based assessment with performance. The Spanish Journal of Psychology, 22, E6.
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. New York,
NY: McGraw‐Hill Inc.
Olson-Buchanan, J. B., & Drasgow, F. (2006). Multimedia situational judgment tests: The medium creates the message. In J. Weekley & R. Ployhart (Eds.), Situational judgment tests: Theory, measurement, and application (pp. 253-278). San Francisco, CA: Jossey-Bass.
Oostrom, J. K., Born, M. P., Serlie, A. W., & Van Der Molen, H. T. (2010). Webcam testing: Validation of an innovative open-ended multimedia test. European Journal of Work and Organizational Psychology, 19,
Oostrom, J. K., Born, M. P., & van der Molen, H. T. (2013). Webcam tests in personnel selection. The Psychology of Digital Media at Work,
Richman, W. L., Kiesler, S., Weisband, S., & Drasgow, F. (1999). A meta-analytic study of social desirability distortion in computer-administered questionnaires, traditional questionnaires, and interviews. Journal of Applied Psychology, 84(5), 754-775. https://doi.org/10.1037/0021-9010.84.5.754
Richman-Hirsch, W. L., Olson-Buchanan, J. B., & Drasgow, F. (2000). Examining the impact of administration medium on examinee perceptions and attitudes. Journal of Applied Psychology, 85(6), 880-887. https://doi.org/10.1037/0021-9010.85.6.880
Roberts, R. D., Matthews, G., & Zeidner, M. (2010). Emotional intelligence: Muddling through theory and measurement. Industrial and Organizational Psychology, 3(2), 140-144. https://doi.org/10.1111/j.1754-9434.2010.01214.x
Robles, M. M. (2012). Executive perceptions of the top 10 soft skills needed in today's workplace. Business Communication Quarterly, 75(4), 453-465. https://doi.org/10.1177/1080569912460400
Rockstuhl, T., Ang, S., Ng, K. Y., Lievens, F., & Van Dyne, L. (2015). Putting judging situations into situational judgment tests: Evidence from intercultural multimedia SJTs. Journal of Applied Psychology, 100, 464-480.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274.
The digital future of work: What skills will be needed? (2017, July). Retrieved
Vandenberg, R. J., & Lance, C. E. (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods, 3(1), 4-70. https://doi.org/10.1177/109442810031002
Viswesvaran, C., & Ones, D. S. (2000). Perspectives on models of job performance. International Journal of Selection and Assessment, 8(4), 216-226. https://doi.org/10.1111/1468-2389.00151
Wagnild, G. M., & Young, H. M. (1993). Development and psychometric evaluation of the resilience scale. Journal of Nursing Management, 1(2), 165-178.
Yan, T., Conrad, F. G., Tourangeau, R., & Couper, M. P. (2010). Should I stay or should I go: The effects of progress feedback, promised task duration, and length of questionnaire on completing web surveys. International Journal of Public Opinion Research, 23(2), 131-147.
Zichermann, G., & Cunningham, C. (2011). Gamification by design:
Implementing game mechanics in web and mobile apps. Cambridge,
MA: O'Reilly Media Inc.
Additional supporting information may be found online in the Supporting Information section at the end of the article.
How to cite this article: Georgiou K, Gouras A, Nikolaou I. Gamification in employee selection: The development of a gamified assessment. Int J Select Assess. 2019;00:1-13. https://doi.org/10.1111/ijsa.12240