The Spanish Journal of Psychology (2019), 22, e6, 1–10.
© Universidad Complutense de Madrid and Colegio Oficial de Psicólogos de Madrid
In order to gain a competitive advantage and profit from their activities, organizations need a good strategy. But to gain a sustainable competitive advantage, one that lasts and cannot easily be imitated by competitors, organizations must have the people in place to implement that strategy successfully. Along these lines, every business needs to screen for talented prospective employees who possess the skills required to fit the job and meet performance standards. Traditional selection
methods, such as general mental ability and personality
tests, predict job performance to some extent (Ryan &
Ployhart, 2014). A number of researchers have recently
suggested that the use of gamification in personnel
selection, such as game-based assessments, might pre-
dict job performance beyond traditional selection
methods (e.g., Armstrong, Landers, & Collmus, 2016;
Fetzer, McNamara, & Geimer, 2017). Game-based assessment is a new method that incorporates game elements into employee selection and has lately been widely applied in personnel selection practice, raising questions about its ability to predict job performance. To the best of our knowledge, no published empirical research has established the effectiveness of game-based assessments in the employee selection process.
Our study is designed to examine the potential of a
game-based assessment in predicting a number of
performance measures. Specifically, we test the
relationship between a game-based assessment and
performance criteria (e.g., perceived job performance,
Grade Point Average-GPA, perceived Organizational
Citizenship Behavior-OCB) to explore its criterion
related validity. We also explore the extent to which a
game-based assessment predicts performance beyond
traditional selection methods (personality measures
and cognitive ability).
Traditional selection tests and performance
Cognitive ability and personality tests are widely used
nowadays by organizations in an effort to predict
future work performance. Several studies and meta-
analyses support not only the validity of cognitive
ability and personality tests but also their effective
combination in predicting job performance (Schmitt,
2014). Cognitive ability tests measure the levels of gen-
eral cognitive ability or intelligence, as well as aspects
of it (e.g., numerical, verbal, abstract, and spatial
ability). Meta-analytic findings indicate that both general cognitive ability and specific cognitive abilities successfully predict performance and work-related outcomes (e.g., Ones, Dilchert, & Viswesvaran, 2012). Moreover, cognitive ability has been shown to be the single best predictor of performance at work, as well as of performance outcomes, in the majority of job positions and situations (Schmitt, 2014). As far as
personality is concerned, the most popular personality
Exploring the Relationship of a Gamified
Assessment with Performance
Ioannis Nikolaou, Konstantina Georgiou and Vasiliki Kotsasarlidou
Athens University of Economics and Business (Greece)
Abstract. Our study explores the validity of a game-based assessment method assessing candidates' soft skills. Using self-reported measures of performance (job performance, Organizational Citizenship Behaviors (OCBs), and Grade Point Average (GPA)), we examined the criterion-related and incremental validity of a game-based assessment, above
and beyond the effect of cognitive ability and personality. Our findings indicate that a game-based assessment mea-
suring soft skills (adaptability, flexibility, resilience and decision making) can predict self-reported job and academic
performance. Moreover, a game-based assessment can predict academic performance above and beyond personality
and cognitive ability tests. The effectiveness of gamification in personnel selection is discussed along with research and
practical implications introducing recruiters and HR professionals to an innovative selection technique.
Received 30 April 2018; Revised 31 October 2018; Accepted 3 November 2018
Keywords: academic performance, game-based assessments, job performance, selection methods.
Correspondence concerning this article should be addressed to
Ioannis Nikolaou. Athens University of Economics and Business.
Department of Management Science and Technology. 104 34 Athens
How to cite this article:
Nikolaou, I., Georgiou, K., & Kotsasarlidou, V. (2019). Exploring the relationship of a gamified assessment with performance. The Spanish Journal of Psychology, 22, e6. doi:10.1017/sjp.2019.5
model is the five-factor model of personality (FFM)
studied extensively in diverse countries and cultures
around the world. The predictive validity of at least
two key factors of the FFM (especially conscientious-
ness but also neuroticism) has been well established
across different job positions and organizations,
whereas meta-analytic findings (Barrick, Mount, & Judge, 2001) have also supported the predictive validity of most personality dimensions of the FFM.
In the performance domain we often study crite-
rion measures, such as academic attainment and
OCB, apart from job performance. OCBs or extra-
role performance are defined as the voluntary and
non-mandatory employee behaviors that positively
influence organizational effectiveness and contribute
to the overall productivity of the organization (Smith,
Organ, & Near, 1983). Both emotional and cognitive intelligence have been found to be related to organizational citizenship behaviors (e.g., Cote & Miners, 2006), whereas personality traits such as agreeableness and conscientiousness have been found to predict OCB as well (e.g., Chiaburu, Oh, Berry, Li, & Gardner, 2011).
Similarly, academic performance has been found to be
significantly predicted by personality and cognitive
ability. Academic performance is usually measured
with student grades or grade point average-GPA,
which is supported to predict performance at work
(Roth, BeVier, Switzer, & Schippmann, 1996). A number
of meta-analytic studies exploring the relationship
between personality and academic performance sup-
ported that agreeableness, conscientiousness and
openness to experience, as well as intelligence, predict
academic performance (Poropat, 2009; Strenze, 2007).
The relationship between cognitive ability and aca-
demic performance is also well established (Chamorro-
Premuzic & Furnham, 2008). “Academic performance has
been the criterion for validating IQ tests for over a century,
and one would hardly refer to these tests as “intelligence”
measures if they did not correlate with academic perfor-
mance” (Chamorro-Premuzic & Furnham, 2008, p. 1597).
It is worth reporting that both general cognitive ability and specific cognitive abilities (working memory, processing speed, spatial ability) can predict academic performance, and that specific cognitive abilities can predict academic performance beyond general cognitive ability (Rohde & Thompson, 2007).
To sum up, there is a large body of research which
indicates general mental ability and personality tests
as important predictors of performance. However, traditional selection methods, such as personality tests, predict job performance only to some extent, and they are prone to faking and social desirability (e.g., Morgeson et al., 2007; Ryan & Ployhart, 2014), phenomena that the application of gamification in employee testing might restrain, thus increasing the assessment's predictive validity and utility in practice. Moreover, the advent of technology has started to render traditional selection methods obsolete, paving the way for more technologically advanced methods capable of reducing the cost of hiring and improving applicant reactions.
Game-based assessment methods and performance
Gamification, the application of game-design ele-
ments in non-game contexts (Armstrong et al., 2016),
has recently caught the attention of researchers and
practitioners in Work/Organizational Psychology and
Human Resources Management, as a promising tool in
employee selection. Employee testing methods have started to incorporate game elements and designs, turning into assessments that are likely to be more fun and attractive to candidates, as well as more difficult to fake (Armstrong et al., 2016). The addition of game elements might make assessments more difficult for candidates to decode, as personality traits, intentions, and behaviors are assessed indirectly. For example, in a gamified Situational Judgement Test (SJT), dressing the scenarios and answer options in game elements might make the desirable behaviors less obvious to candidates; as a result, because the setting is removed from real-life situations, it becomes more difficult for them to distort, intentionally or unintentionally, what their reactions would be in a given situation.
Moreover, building on the concept of “stealth assess-
ment”, Fetzer et al. (2017) highlighted the potential
of game-based assessments in predicting job perfor-
mance. Stealth assessments can accurately and efficiently diagnose the level of students' competencies by continuously extracting performance data gathered during the course of playing/learning (Shute, Ventura, Bauer, & Zapata-Rivera, 2009). In other words,
stealth assessment is an assessment that is “seamlessly
woven into the fabric of the learning or gaming environment
so that it’s virtually invisible…reducing thus test anxiety
while not sacrificing validity and consistency” (Shute,
2015, p. 63). Along these lines, a gamified assessment environment might distract candidates from the fact that they are being assessed, reducing test anxiety and promoting behaviors that are more likely to appear unconsciously rather than the desirable or socially acceptable ones. Game engagement, and the use of contexts diagnosing how an individual handled a given problem, similar to work-sampling techniques, might lead to more robust inferences about performance than traditional selection inventories that rely on self-reported measures (Fetzer et al., 2017). Taking into consideration
all the evidence mentioned above, we aim to explore
the effectiveness of the game-based assessment method
measuring four soft skills (i.e., resilience, adaptability,
flexibility, and decision-making) by testing whether its
dimensions are related to performance measures over
and above traditional selection measures.
A major challenge that employers nowadays face
when hiring young graduates is the lack of applicants
with the right skills and competencies (Picchi, 2016,
August 31). Among the most desirable soft skills that
employers are looking for are adaptability, flexi-
bility, decision-making, and resilience (e.g., Gray, 2016;
McKinsey & Company, 2017). Resilience, the ability to
bounce back from adversities (Luthans, 2002), might
be vital for both personal and job effectiveness with
numerous positive outcomes in work and academic
settings. For example, resilient individuals are likely to
have higher levels of job performance, job satisfaction
and organizational commitment (e.g., Avey, Reichard,
Luthans, & Mhatre, 2011), as well as, OCB (Paul, Bamel,
& Garg, 2016). Moreover, students with higher levels
of resilience are likely to demonstrate increased aca-
demic performance levels, as well as higher class
participation, enjoyment and self-esteem (Martin &
Marsh, 2006, 2008). Similarly, adaptability, the “response
or people’s adjustment to changing environmental situa-
tions” (Hamtiaux, Houssemand, & Vrignaud, 2013,
p. 130) has positive outcomes in both academic and
work contexts. For example, successful students (GPA
of 80% or more) were found to have high levels of
interpersonal, adaptability, and stress management
skills (Parker et al., 2004). Moreover, high adaptability
is related to positive relationships and behaviors in
school, such as studying, leadership, and reduced
school problems (Brackett, Rivers, Reyes, & Salovey,
2012). In the work context, adaptability is important
in performing well, handling ambiguity, and dealing
with uncertainty and stress (Kehoe, 2000). Moreover, volunteering to help co-workers (an aspect of OCB) might require one to adapt to changing co-worker behavior (Ployhart & Bliese, 2006, p. 11). Similarly to adaptability, flexibility, defined as the individual's capacity
to adapt, is likely to have positive outcomes in work,
academic and job seeking settings (Golden & Powell,
2000). Individuals with high levels of flexibility are able to address different situations, thus creating value for organizations rather than harming them through an inability to adjust to change (Bhattacharya, Gibson, & Doty, 2005). Moreover, OCB performers are likely to increase their flexibility in order to adjust to the requirements of various roles and settings at work, thus displaying behaviors that contribute to organizational effectiveness (Kwan & Mao, 2011). Organizational
success, especially in changing environments, depends
also largely on effective decision-making, defined as
an intellectual process leading to a response to cir-
cumstances through the selection among alternatives
(Nelson, 1984). Employees capable of effective decision-making devote effort to analyzing information to better understand a company's threats, opportunities, and options; consult others and collaborate in making decisions; and act proactively in getting things done, thus enhancing organizational performance (Miller & Lee, 2001). Participation in decision-making also leads to positive outcomes within educational settings, such as OCB (Somech, 2010).
Taking into consideration all the evidence men-
tioned above, we aim to establish the effectiveness of
the gamified selection method that we developed by
testing whether the gamified SJT dimensions are related
to performance and in particular, to performance
measures, OCB and GPA, over and above traditional
selection measures (e.g., personality tests, cognitive
ability); therefore, we state the following hypotheses.
H1: Game-based assessment dimensions will be positively associated with participants' job performance scores.
H2: Game-based assessment dimensions will be positively associated with participants' GPA.
H3: Game-based assessment dimensions will be positively associated with participants' OCB.
H4: Game-based assessment dimensions will provide
incremental validity above and beyond the effect of
cognitive ability and personality in predicting par-
ticipants’ job performance scores.
H5: Game-based assessment dimensions will provide
incremental validity above and beyond the effect of
cognitive ability and personality in predicting partici-
pants’ GPA.
H6: Game-based assessment dimensions will provide
incremental validity above and beyond the effect of
cognitive ability and personality in predicting partici-
pants’ OCB.
Sample & Procedure
The study was conducted in Greece during the last months of 2017, with participants recruited via the authors' university career office. We contacted final-year undergraduate students, post-graduate students, and recent graduates to participate in a survey about a selection method, as they were approaching graduation and were likely to search for employment soon (e.g., van Iddekinge, Lanivich, Roth, & Junco, 2016).
The data collection took place in two phases. In the
first phase, participants were invited to complete the
self-reported measures of cognitive ability, personality,
performance, and OCB. Three to four weeks after completion, participants from the first phase were invited to play the game-based assessment. In total, 193 participants took part in the first phase, and 120 of them also participated in the second phase (a response rate of 62%). The majority of them were
females (64%) with a mean age of 26 years. As far as
their education level is concerned, 46% of the partic-
ipants were final year undergraduates, 15% were
post-graduate students, another 15% were univer-
sity graduates and 24% had already acquired a post-
graduate degree. Most of them (55%) were currently
employed, working in entry-level (57.5%) or middle-
level positions (27.5%).
Measures

Cognitive ability. This was measured with items taken from the International Cognitive Ability Resource (ICAR, 2014). ICAR is a public-domain and open-
source tool created by Condon and Revelle (2014), aim-
ing to provide a large and dynamic bank of cognitive
ability measures for use in a wide variety of applica-
tions, including research. The test includes four item
types: Three-Dimensional Rotations, Letter and Number
Series, Matrix Reasoning, and Verbal Reasoning. We
used the 11 Matrix Reasoning items, which contain
stimuli similar to those used in Raven’s Progressive
Matrices, and which are also most closely related to
abstract reasoning. “The stimuli are 3x3 arrays of geo-
metric shapes with one of the nine shapes missing.
Participants are instructed to identify which of six
geometric shapes presented as response choices will
best complete the stimuli” (ICAR, 2014, p. 2). It is worth noting that only one answer is correct, although the options “None of the above” and “Do not know” are also available. An overall score is calculated, with higher scores indicating higher levels of cognitive ability.
Personality. Participants completed the 50-item International Personality Item Pool (IPIP; Goldberg et al., 2006) to assess the Five-Factor model of personality. Each scale consisted of 10 items. Standard
IPIP instructions were presented to participants, who
responded on a 5-point Likert-type scale ranging from
1 (inaccurate) to 5 (accurate). Research has reported
good internal consistencies for IPIP factors (see, for
example, Lim & Ployhart, 2006). In our study, reli-
ability estimates were .81 for conscientiousness, .83 for
emotional stability, .83 for extroversion, .79 for agree-
ableness, and .75 for openness to experience.
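The reliability estimates reported above are internal consistency (Cronbach's alpha) coefficients. As a rough illustration of how such an estimate is computed, a minimal sketch with simulated item scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Simulated 10-item scale: each item = a shared trait plus item-specific noise
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
scores = trait + rng.normal(size=(200, 10))
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```

Because every simulated item shares the same underlying trait, alpha comes out high; a set of uncorrelated items would drive it toward zero.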
Performance measures. Overall job performance was self-evaluated, by working individuals only, using a measure employed by Nikolaou and Robertson (2001). It consists of six items, for each of which the individual indicates agreement or disagreement with the behavior described on a five-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). An overall job
performance score was calculated by averaging the
scores of the six items eliciting internal consistency
reliability of .91. Example items include “Achieve the
objectives of the job” and “Demonstrates expertise in all
aspects of the job”. We also asked participants to indicate
their GPA from their first degree in order to use it as an
alternative to job performance for non-working indi-
viduals. The range of the grading system in Greek public universities is 0.00–10.00 (Excellent = 8.50–10.00, Very Good = 6.50–8.49, Good = 5.00–6.49, and Fail = 0.00–4.99). The GPA reported by participants was the
average grade awarded for the duration of their bach-
elor studies.
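The grading bands described above can be captured in a small classification helper. This is an illustrative sketch only; the function name and the inclusive lower bounds at the cut points are assumptions, not part of the study:

```python
def greek_gpa_band(gpa: float) -> str:
    """Classify a Greek university GPA (0.00-10.00 scale) into its band."""
    if not 0.0 <= gpa <= 10.0:
        raise ValueError("GPA must lie between 0.00 and 10.00")
    if gpa >= 8.5:
        return "Excellent"
    if gpa >= 6.5:
        return "Very Good"
    if gpa >= 5.0:
        return "Good"
    return "Fail"

print(greek_gpa_band(7.39))  # the sample's mean GPA falls in "Very Good"
```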
Organizational Citizenship Behavior (OCB). OCBs were self-evaluated, by working individuals only, using a measure developed by Smith et al. (1983). It consists of 16 items, for each of which the individual indicates agreement or disagreement with the behavior described on a five-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). The original scale measures two sub-
scales: altruism and generalized compliance. However,
for the purposes of the current study we only used the
overall OCB score eliciting internal consistency reli-
ability of .70. Example items include “I help other employees with their work when they have been absent” and “I exhibit punctuality in arriving at work on time in the morning and after lunch breaks”.
Soft skills. We used a Game-Based Assessment (GBA) developed by Owiwi in order to measure the four soft
skills evaluated by the game, namely resilience, adapt-
ability, flexibility and decision-making. The four skills
are evaluated following a SJT methodology converted
into an on-line game environment, with fictional char-
acters. The Owiwi game has demonstrated satisfactory psychometric properties and a high level of equivalence with the originally developed SJT measuring the four soft skills (Georgiou, Nikolaou, & Gouras, 2017).
Resilience is defined as “the developable capacity to
rebound or bounce back from adversity, conflict, and failure
or even positive events, progress, and increased responsi-
bility” (Luthans, 2002, p. 702). “Adaptability is related to
change and how people deal with it; that is to say, people’s
adjustment to changing environments” (Hamtiaux et al.,
2013, p. 130). Flexibility is defined as the demonstra-
tion of “adaptable as opposed to routine behaviors; it is the
extent to which employees possess a broad repertoire of
behavioral scripts that can be adapted to situation-specific
demands” (Bhattacharya et al., 2005, p. 624) and finally
decision-making is defined as an intellectual process
leading to a response to circumstances through selec-
tion among alternatives (Nelson, 1984). Individualized
feedback is provided to all participants upon comple-
tion of the game.
Results

Table 1 presents the inter-correlation matrix of the study's variables. An interesting pattern we observe in the inter-correlation matrix is that the cognitive ability measure is not associated with any of the scales measured here. Also, the self-reported job performance measure is significantly correlated with conscientiousness, emotional stability, and openness to experience from the five-factor model of personality. Moreover, the OCB measure is associated with agreeableness, similarly to past research on the relationship between agreeableness and OCB, but not with conscientiousness. Finally, the soft skills assessed by the game-based assessment, which are the main focus of the current study, are not correlated with any of the criterion measures, with the exception of the positive correlation between GPA and decision-making, thus rejecting H1 and H3 and only partially confirming H2.
Next, we proceed with the examination of our
research hypotheses. Our main focus in this study is
the suitability of the game-based assessment as a selec-
tion tool, above and beyond the well-established effect
of cognitive ability and personality, especially conscien-
tiousness. Our first three hypotheses deal with the
association between game-based assessment and the
three performance criteria. In order to explore these hypotheses, we conducted a separate multiple regression analysis for each of the three criterion measures. The results of these analyses are presented in Table 2.
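Each of these analyses amounts to an ordinary least squares regression of one criterion on the four skill scores, with the block's explained variance read off as R². A minimal sketch of that computation, using simulated data rather than the study's (the variable names are illustrative):

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 from an OLS regression of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    residuals = y - X1 @ beta
    return 1.0 - (residuals @ residuals) / ((y - y.mean()) @ (y - y.mean()))

# Simulated scores for four skills; the criterion loads on one of them
rng = np.random.default_rng(1)
skills = rng.normal(size=(120, 4))  # resilience, flexibility, adaptability, decision-making
criterion = 0.4 * skills[:, 3] + rng.normal(size=120)
r2 = r_squared(skills, criterion)
print(round(r2, 2))
```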
The results of the regression analyses show that flexibility and decision-making are positively associated with self-reported job performance and GPA, respectively. The block of the four skills predicts 13%, 7%, and 10% of the total variance in job performance, OCB, and GPA, respectively. Therefore, H1 and H2 are partially confirmed, whereas H3 is rejected. Subsequently, we
explored the incremental validity of the game-based
assessment. In order to explore H4-H6 we conducted a
number of hierarchical regression analyses, controlling
for the effect of cognitive ability and the five-factor
model of personality. The results of these analyses are
presented in Table 3.
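Incremental validity here means the change in R² (and its F test) when the four skills are entered after a base model containing cognitive ability and the five personality factors. A hedged sketch of that two-step computation, again with simulated data standing in for the study's:

```python
import numpy as np

def ols_r2(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 from an OLS regression of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    residuals = y - X1 @ beta
    return 1.0 - (residuals @ residuals) / ((y - y.mean()) @ (y - y.mean()))

def incremental_f(X_base: np.ndarray, X_added: np.ndarray, y: np.ndarray):
    """Delta-R^2 and its F statistic for adding predictors to a base model."""
    r2_base = ols_r2(X_base, y)
    X_full = np.column_stack([X_base, X_added])
    r2_full = ols_r2(X_full, y)
    n, k_full, m = len(y), X_full.shape[1], X_added.shape[1]
    delta_r2 = r2_full - r2_base
    f_stat = (delta_r2 / m) / ((1.0 - r2_full) / (n - k_full - 1))
    return delta_r2, f_stat

# Step 1: cognitive ability + five personality factors; Step 2: add the four skills
rng = np.random.default_rng(2)
n = 113
controls = rng.normal(size=(n, 6))
skills = rng.normal(size=(n, 4))
gpa = 0.3 * skills[:, 2] + 0.3 * skills[:, 3] + rng.normal(size=n)
delta_r2, f_stat = incremental_f(controls, skills, gpa)
print(round(delta_r2, 2), round(f_stat, 1))
```

A significant F on the R² change indicates that the added block predicts variance in the criterion beyond the controls, which is the pattern reported here for GPA.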
The results of these analyses demonstrate that the soft skills measured by the game-based assessment do not predict additional variance in either job performance or OCBs for the working individuals of our sample, above the effect of cognitive ability and personality, thus rejecting H4 and H6. However, the skills seem to have an important effect on GPA. More specifically, both as a group and individually (adaptability and decision-making), they demonstrate a statistically significant relationship with GPA, above and beyond the effect of cognitive ability and personality. These results establish the usefulness of game-based assessments in predicting educational attainment, as measured by GPA, both as a group and individually in the case of adaptability and decision-making.
Discussion

Our study explores the effectiveness of a game-based assessment in employee selection. Extending previous
Table 1. Inter-Correlation Matrix of Study’s Variables (N = 63–120)
Scales Range M SD 1 2 3 4 5 6 7 8 9 10 11 12 13
1. Cognitive ability 11 7.69 2.33
2. Extroversion 36 34.07 7.87 –.03
3. Agreeableness 25 42.05 5.32 .08 .47**
4. Conscientiousness 35 38.42 7.10 –.05 –.16 .00
5. Emotional Stability 33 29.15 7.67 .07 .20* .14 .21*
6. Openness to experience 29 36.78 6.07 .40 .16 .20* .04 –.05
7. Resilience 58 76.35 11.85 .10 .04 .11 .14 .31 .32**
8. Flexibility 58 64.98 12.71 .05 –.03 .11 .07 .13 –.02 .20*
9. Adaptability 81 74.57 11.60 .03 .01 .07 –.12 –.09 .08 .40** .26**
10. Decision-making 46 76.42 9.49 –.00 .05 .12 .08 .12 –.03 .23* .03 .20*
11. Job Performance 14 26.21 3.07 .13 .04 .16 .40** .26* .32** .13 .22 –.07 .13
12. OCB 38 64.77 6.78 .05 .22 .39** .14 .19 .05 –.14 –.03 –.18 .12 .26*
13. GPA 3.1 7.39 0.72 .03 –.06 –.11 .13 –.03 .00 –.02 .08 –.18 .25** –.02 .07
14. Age 25 26.36 6.21 .07 –.18* .03 .11 –.04 .10 .12 .19* .11 –.12 .22 –.07 –.04
Note: *p < .05. **p < .01. ***p < .001.
research on Work/Organizational Psychology and traditional selection methods, we introduce a game-based assessment designed to measure candidates' soft skills (e.g., adaptability, flexibility, decision-making), which we found to be associated with self-reported measures of performance. Our study contributes to employee selection research, providing some support for the use of gamification in soft skills assessments and their ability to predict performance in work and academic settings.
For example, a game-based assessment measuring
soft skills, such as decision-making and flexibility, can
predict test-takers’ self-reported job performance and
GPA. By incorporating game elements into assessments that do not use self-reported measures but assess behavioral intentions, test-takers' attraction to and engagement with the assessment might be enhanced, while it might be more difficult for them to understand what is being assessed and what the correct answer is (Armstrong et al., 2016; Fetzer et al., 2017). As such, the use of game elements and designs might improve the validity of assessments.
Moreover, Armstrong et al. (2016) suggested that
game-based assessments, such as gamified simula-
tions, might be employed to assess important pre-
dictor constructs like learning agility in employee
selection settings where survey methodology may
not be adequate. Along these lines, our study extends
research on traditional selection methods, exploring
the incremental validity of a game-based assessment
assessing soft skills. Game-based assessments mea-
suring soft skills, such as adaptability and decision
making, can predict academic performance (e.g., GPA),
above and beyond traditional selection methods (e.g.,
cognitive ability and personality tests). However, the
soft skills measured by the game-based assessment do
not predict additional variance in either job perfor-
mance or OCBs, above the effect of cognitive ability
and personality.
To sum up, both personality and intelligence tests have been extensively tested in academic contexts, and their validity in predicting GPA has been established. The emergence of the internet and technology, as well as
Table 2. Multiple Regression Analysis of the GBA on the Three Criterion Measures
Job Performance (n = 63) OCB (n = 63) GPA (n = 113)
Predictors β t R² F β t R² F β t R² F
Resilience .14 .94 .13 2.10 –.12 –.79 .07 1.10 –.16 –1.58 .10 3.06
Flexibility .30* 2.20 .08 .58 .06 .58
Adaptability –.28 –1.85 –.18 1.14 .18 1.76
Decision making .17 1.29 .20 1.47 .25** 2.61
Note: OCB = Organizational Citizenship Behavior; GPA = Grade Point Average.
*p < .05. **p < .01. ***p < .001.
Table 3. Hierarchical Regression Analysis of the GBA on the Three Criterion Measures Controlling for Cognitive Ability and Personality
Job Performance (n = 63) OCB (n = 63) GPA (n = 113)
Predictors β t ΔR² ΔF β t ΔR² ΔF β t ΔR² ΔF
Step 1
Cognitive ability .04 .30 .26 3.32** .01 .10 .20 2.33* .09 .92 .04 .70
Extroversion –.05 –.30 .09 .55 .08 .77
Agreeableness .08 .53 .35* 2.27 –.20 –1.86
Conscientiousness .28* 2.07 .18 1.26 .18 1.83
Emotional Stability .06 .44 .08 .60 –.07 –.73
Openness to experience .30** 2.44 .02 .12 .02 .20
Step 2
Resilience –.02 –.15 .06 1.11 –.17 –1.05 .03 .57 –.20 –1.84 .12 3.73**
Flexibility .26 1.90 –.03 –.24 .07 .71
Adaptability –.19 –1.30 –.04 –.23 .22* 2.07
Decision making .16 1.15 .03 .19 .26** 2.73
Note: OCB = Organizational Citizenship Behavior; GPA = Grade Point Average.
*p < .05. **p < .01. ***p < .001.
the familiarity of younger generations with games, is likely to reflect an increasing interest in the validity of game-based assessments in predicting academic performance beyond traditional selection methods. The
additive value of using a game-based assessment mea-
suring adaptability and decision making, both as a
group and individually, in predicting GPA beyond
personality (e.g., FFM) and cognitive ability tests (e.g.,
ICAR), has been established.
Our results are of interest to researchers and practitioners of Work/Organizational Psychology interested in the prediction of work and academic performance, in that they support the incremental validity of a game-based assessment over and above traditional selection methods. They also address open empirical questions about the psychometric properties and effectiveness of game-based assessments in employee selection.
Game-based assessments might be used as a supplement to, or replacement for, traditional selection methods, as they add to the prediction of the performance of candidates or students. However, it is highly important to test the effectiveness of game-based assessments using objective measures of performance, such as supervisors' ratings, and a test-retest reliability methodology, to further establish the psychometric properties of the new assessment method. Moreover,
similar to SJTs, game-based assessments might improve
the information gathered about applicants during
the selection process as well as applicant reactions
(Armstrong et al., 2016). Gamification might increase engagement levels, which in turn might lead to retention and motivation during the selection process, as well as better predictions about person-job fit (e.g., Chamorro-Premuzic, Akhtar, Winsborough, & Sherman, 2017). By using new technologies and game elements in assessments, recruiters and HR professionals might improve selection decisions, making more robust inferences about applicants' performance, since game-based assessments do not rely on self-report measures that applicants are likely to fake (Fetzer et al., 2017).
Another reason that the use of traditional selection methods might be reconsidered and replaced by new game-based tools is that the latter are popular among younger generations. Organizations that incorporate game-based assessments into the employee selection process might provide applicants with a new, technologically advanced experience, thus sending signals about organizational attributes (e.g., innovation) and making the process more fun.
The present study is not without limitations. First
of all, performance outcomes were assessed via self-
report measures. Although it is suggested that objec-
tive measures are the best indicators of individual
employee performance, the unavailability of such
measurements has forced many previous studies to
use self-reported measures of performance (Pransky
et al., 2006). The use of objective measures or supervisors' ratings of employees' performance would lead to more robust findings about the predictive validity of the game-based assessment. Also, some of the GBA's dimensions were not found to predict performance.
One reason might be the use of self-reported mea-
sures of performance. “It is likely that self-report and
objective measures provide information on distinct, dif-
ferent aspects of work performance. Objective measures,
even in jobs that are apparently routine and straightfor-
ward, can present challenging levels of complexity, and
may provide an estimation of only one dimension of actual
job performance.” (Pransky et al., 2006, p. 396). Future
research should explore the ability of the GBA to
predict one dimension of performance (e.g., resil-
ience or adaptability) using supervisory ratings or
objective performance data.
To further establish the effectiveness of gamification in employee selection, future research should also explore applicants' reactions. For example, candidates perceive multimedia tests as more valid and enjoyable; as a result, they are more satisfied with the selection process, and organizational attractiveness and positive behavioral intentions increase (Oostrom, Born, & van der Molen, 2013). The impact of game-based assessments on perceived fairness, organizational attractiveness, and job pursuit behaviors should also be investigated to further support their suitability in the selection process. Also,
the current study does not address competence and
previous experience with technology, which might
influence test-takers’ performance. For example,
candidates who have experience with on-line games
and/or feel competent to use new technology might
have less anxiety when new technology is used (Cascio
& Montealegre, 2016), and as a result, perform better in
a game-based assessment. In general, the limited knowledge and lack of empirical research on the use of gamification in employee selection have made the establishment of game-based assessment as an effective selection method even more challenging.
Future research should also explore the role of
demographic variables on individuals’ performance in
game-based assessments. Instead of using demographics as mere control variables in theory testing, Spector and Brannick (2011) suggest rethinking the use of demographics in the first place, focusing on the mechanisms that explain relations with demographics rather than on the demographic variables themselves, which serve as proxies for the real variables of interest.
Finally, the study might suffer from common method variance effects, since we only used self-report measures. In order to reduce this effect, we asked the
8 Nikolaou et al.
participants to complete the measures on two separate occasions. Moreover, Harman's single-factor test, which we conducted following the guidelines of Podsakoff, MacKenzie, Lee, and Podsakoff (2003), suggested that common method variance did not substantially affect our results.
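Harman's single-factor test loads all measured items into a single unrotated factor solution and inspects how much variance the first factor absorbs; a first factor accounting for the majority of the variance is taken as a warning sign of common method bias. A minimal sketch of this check, using the share of variance captured by the first principal component of the item correlation matrix as a common proxy and synthetic data in place of the study's survey items:

```python
import numpy as np

def harman_single_factor(items):
    """Proportion of total variance captured by the first unrotated
    component of the item correlation matrix (a common proxy for
    Harman's single-factor test)."""
    corr = np.corrcoef(items, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # sorted descending
    return eigenvalues[0] / eigenvalues.sum()

rng = np.random.default_rng(1)
items = rng.normal(size=(200, 12))   # synthetic: 200 respondents, 12 items
share = harman_single_factor(items)

# Rule of thumb: a first factor explaining more than half of the
# variance signals a potential common-method problem.
print(round(share, 3))
```

Applied to the study's actual item responses, a `share` well below .50 is the pattern conventionally read as discouraging a common-method-variance explanation of the results.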
Game-based assessments have recently appeared in employee selection, calling for further research on their validity. Our study contributes to research on employee selection methods by examining the criterion-related validity of a game-based assessment measuring soft skills. Findings of our study indicate that assessments incorporating game elements might predict self-rated job performance and academic performance, as measured by GPA. Moreover, exploring
the incremental validity of the game-based assessment
method, we provided evidence that it can predict GPA
above and beyond the effect of traditional selection
methods, such as personality and cognitive ability tests.
These results could change the way organizations and colleges approach traditional assessment methods, making the use of gamification in work and academic contexts more widespread in the future.
Armstrong M. B., Landers R. N., & Collmus A. B. (2016).
Gamifying recruitment, selection, training, and performance
management: Game-thinking in human resource
management. In D. Davis & H. Gangadharbatla (Eds.),
Emerging research and trends in gamification (pp. 140–165).
Hershey, PA: IGI Global.
Avey J. B., Reichard R. J., Luthans F., & Mhatre K. H. (2011).
Meta-analysis of the impact of positive psychological
capital on employee attitudes, behaviors, and
performance. Human Resource Development Quarterly, 22(2),
Barrick M. R., Mount M. K., & Judge T. A. (2001).
Personality and performance at the beginning of the
new millennium: What do we know and where do
we go next? International Journal of Selection and
Assessment, 9(1-2), 9–30.
Bhattacharya M., Gibson D. E., & Doty D. H. (2005). The
effects of flexibility in employee skills, employee
behaviors, and human resource practices on firm
performance. Journal of Management, 31(4), 622–640.
Brackett M. A., Rivers S. E., Reyes M. R., & Salovey P.
(2012). Enhancing academic performance and social and
emotional competence with the RULER feeling words
curriculum. Learning and Individual Differences, 22(2),
Cascio W. F., & Montealegre R. (2016). How technology is
changing work and organizations. Annual Review of
Organizational Psychology and Organizational Behavior, 3(1),
Chamorro-Premuzic T., Akhtar R., Winsborough D., &
Sherman R. A. (2017). The datafication of talent:
How technology is advancing the science of human
potential at work. Current Opinion in Behavioral Sciences,
18, 13–16.
Chamorro-Premuzic T., & Furnham A. (2008). Personality,
intelligence and approaches to learning as predictors of
academic performance. Personality and Individual
Differences, 44(7), 1596–1603.
Chiaburu D. S., Oh I.-S., Berry C. M., Li N., & Gardner R. G.
(2011). The five-factor model of personality traits and
organizational citizenship behaviors: A meta-analysis.
Journal of Applied Psychology, 96, 1140–1166. https://doi.
Condon D. M., & Revelle W. (2014). The international
cognitive ability resource: Development and initial
validation of a public-domain measure. Intelligence,
43, 52–64.
Côté S., & Miners C. T. H. (2006). Emotional intelligence,
cognitive intelligence, and job performance. Administrative
Science Quarterly, 51(1), 1–28.
Fetzer M., McNamara J., & Geimer J. L. (2017).
Gamification, serious games and personnel selection. In
H. W. Goldstein, E. D. Pulakos, J. Passmore, & C. Semedo
(Eds.), The Wiley Blackwell handbook of the psychology of
recruitment, selection and employee retention (pp. 293–309).
West Sussex, UK: John Wiley & Sons Ltd.
Georgiou K., Nikolaou I., & Gouras A. (2017). Serious
gaming in employees’ selection process. In I. Nikolaou,
Alliance for Organizational Psychology Invited
Symposium-The impact of technology on recruitment and
selection: An international perspective. Paper presented at
the 32nd Annual Conference of the Society for Industrial and
Organizational Psychology, Orlando, USA.
Goldberg L. R., Johnson J. A., Eber H. W., Hogan R.,
Ashton M. C., Cloninger C. R., & Gough H. G. (2006).
The international personality item pool and the future of
public-domain personality measures. Journal of Research in
Personality, 40(1), 84–96.
Golden W., & Powell P. (2000). Towards a definition
of flexibility: In search of the Holy Grail? Omega, 28(4),
Gray A. (2016). The 10 skills you need to thrive in the Fourth
Industrial Revolution. Retrieved from The World Economic
Forum website:
Hamtiaux A., Houssemand C., & Vrignaud P. (2013).
Individual and career adaptability: Comparing models
and measures. Journal of Vocational Behavior, 83(2), 130–141.
International Cognitive Ability Resource (ICAR) (2014).
[Public-domain assessment tool]. Retrieved from https://
Kehoe J. F. (2000). Managing selection in changing
organizations: Human resource strategies. San Francisco,
CA: Jossey-Bass Publ.
Kwan H.-K., & Mao Y. (2011). The role of citizenship
behavior in personal learning and work–family
enrichment. Frontiers of Business Research in China, 5(1),
Lim B.-C., & Ployhart R. E. (2006). Assessing the convergent
and discriminant validity of Goldberg’s international
personality item pool: A multitrait-multimethod
examination. Organizational Research Methods, 9(1), 29–54.
Luthans F. (2002). The need for and meaning of positive
organizational behavior. Journal of Organizational Behavior,
23(6), 695–706.
Martin A. J., & Marsh H. W. (2006). Academic resilience and
its psychological and educational correlates: A construct
validity approach. Psychology in the Schools, 43(3), 267–281.
Martin A. J., & Marsh H. W. (2008). Academic buoyancy:
Towards an understanding of students’ everyday
academic resilience. Journal of School Psychology, 46(1),
McKinsey & Company (Producer). (2017). The digital
future of work: What skills will be needed? [Video].
Available from
Miller D., & Lee J. (2001). The people make the process:
Commitment to employees, decision making, and
performance. Journal of Management, 27(2), 163–189.
Morgeson F. P., Campion M. A., Dipboye R. L.,
Hollenbeck J. R., Murphy K., & Schmitt N. (2007).
Are we getting fooled again? Coming to terms with
limitations in the use of personality tests for personnel
selection. Personnel Psychology, 60(4), 1029–1049.
Nelson G. D. (1984). Assessment of health decision making
skills of adolescents. Retrieved from ERIC database
Nikolaou I., & Robertson I. T. (2001). The Five-Factor
model of personality and work behavior in Greece.
European Journal of Work and Organizational Psychology, 10(2),
Ones D. S., Dilchert S., & Viswesvaran C. (2012). Cognitive
abilities. In N. Schmitt (Ed.), The Oxford handbook of
personnel assessment and selection (pp. 179–224). New York,
NY: Oxford University Press.
Oostrom J. K., Born M. P., & van der Molen H. T.
(2013). Webcam tests in personnel selection. In D. Derks &
A. Bakker (Eds.), The psychology of digital media at work
(pp. 166–180). USA & Canada: Psychology Press.
Parker J. D. A., Creque R. E., Barnhart D. L., Harris J. I.,
Majeski S. A., Wood L. M., ... Hogan M. J. (2004).
Academic achievement in high school: Does emotional
intelligence matter? Personality and Individual Differences,
37(7), 1321–1330.
Paul H., Bamel U. K., & Garg P. (2016). Employee
resilience and OCB: Mediating effects of organizational
commitment. Vikalpa, 41(4), 308–324. https://doi.
Picchi A. (2016, August 31). Do you have the “soft skills”
employers badly need? Retrieved from
Ployhart R. E., & Bliese P. D. (2006). Individual adaptability
(I–ADAPT) theory: Conceptualizing the antecedents,
consequences, and measurement of individual differences
in adaptability. In C. Shawn Burke, Linda G. Pierce, &
Eduardo Salas (Eds.) Advances in human performance and
cognitive engineering research (Vol. 6, 3–39). Bingley, UK:
Emerald Group Publishing Limited.
Podsakoff P. M., MacKenzie S. B., Lee J.-Y., & Podsakoff
N. P. (2003). Common method biases in behavioral research:
A critical review of the literature and recommended
remedies. Journal of Applied Psychology, 88(5), 879–903.
Poropat A. E. (2009). A meta-analysis of the five-factor model
of personality and academic performance. Psychological
Bulletin, 135(2), 322–338.
Pransky G., Finkelstein S., Berndt E., Kyle M., Mackell J., &
Tortorice D. (2006). Objective and self-report work
performance measures: A comparative analysis.
International Journal of Productivity and Performance
Management, 55(5), 390–399.
Rohde T. E., & Thompson L. A. (2007). Predicting
academic achievement with cognitive ability.
Intelligence, 35(1), 83–92.
Roth P. L., BeVier C. A., Switzer F. S., III., & Schippmann J. S.
(1996). Meta-analyzing the relationship between grades and
job performance. Journal of Applied Psychology, 81(5), 548–556.
Ryan A. M., & Ployhart R. E. (2014). A century of selection.
Annual Review of Psychology, 65(1), 693–717. https://doi.
Schmitt N. (2014). Personality and cognitive ability as
predictors of effective performance at work. Annual Review
of Organizational Psychology and Organizational Behavior,
1(1), 45–65.
Shute V. (2015, August). Stealth assessment in video games. Paper presented at the Australian Council for Educational Research conference "Learning Assessments: Designing the Future". Melbourne, Australia.
Shute V. J., Ventura M., Bauer M., & Zapata-Rivera D.
(2009). Melding the power of serious games and
embedded assessment to monitor and foster learning. In
U. Ritterfeld, M. Cody, & P. Vorderer (Eds.), Serious games:
Mechanisms and effects (pp. 295–321). New York and
London: Routledge, Taylor & Francis.
Smith C. A., Organ D. W., & Near J. P. (1983).
Organizational citizenship behavior: Its nature and
antecedents. Journal of Applied Psychology, 68(4), 653–663.
Somech A. (2010). Participative decision making
in schools: A mediating-moderating analytical
framework for understanding school and teacher
outcomes. Educational Administration Quarterly,
46(2), 174–209.
Spector P. E., & Brannick M. T. (2011). Methodological
urban legends: The misuse of statistical control
variables. Organizational Research Methods, 14(2),
Strenze T. (2007). Intelligence and socioeconomic success:
A meta-analytic review of longitudinal research. Intelligence,
35(5), 401–426.
van Iddekinge C. H., Lanivich S. E., Roth P. L., & Junco E.
(2016). Social media for selection? Validity and adverse
impact potential of a Facebook-based assessment. Journal
of Management, 42(7), 1811–1835.
... With the fast and dynamic change in the workspace, organizations are these days confronting considerably harder rivalry (Fetzer, et al., 2017;Nikolaou, et al., 2019). Digitalization and globalization have set administrations to think out about the container to get maintainability and benefit inside the business sectors (Nikolaou, et al., 2019). ...
... With the fast and dynamic change in the workspace, organizations are these days confronting considerably harder rivalry (Fetzer, et al., 2017;Nikolaou, et al., 2019). Digitalization and globalization have set administrations to think out about the container to get maintainability and benefit inside the business sectors (Nikolaou, et al., 2019). The patterns have now additionally upset with the incorporation of sudden factors like a pandemic looking like COVID19 which the world is as yet confronting and none of the organizations can yet anticipate the fate of the dependability of the workspace (Fitriani, 2020). ...
... The patterns have now additionally upset with the incorporation of sudden factors like a pandemic looking like COVID19 which the world is as yet confronting and none of the organizations can yet anticipate the fate of the dependability of the workspace (Fitriani, 2020). Thinking about the reality, this investigation has been intended to contemplate the most recent patterns identified with the work space to suggest better freedoms which organizations can adapt on schedule to address the issues of the clients and have legitimate supportability and development in the worldwide and homegrown business sectors (Fetzer, et al., 2017;Nikolaou, et al., 2019). Advances in technology have propelled organizations toward the adoption of internet-based recruitment methods (Anderson, 2003, Kraichy & Chapman, 2014. ...
Full-text available
Gamified recruitment is the utilization of game design in a non-game setting to influence, connect with, and inspire desired practices. Recently, most organizations have employed the utilization of gamification with the end goal of employee recruitment. The result of these gamified recruitment methods is purportedly positive, although, no observational studies to date have been conducted to decide their effectiveness. This paper was designed to give an objective and empirical analysis of the effectiveness of gamified recruitment method. With more emphasis on three components of the gamified recruitment i.e. economic value, persuasive value, and informative value. Results demonstrated that gamified recruitment in correspondence to the economic value, persuasive value, and informative value is more successful and effective in employee performance in organizations. Implications of these findings are analyzed and discussed in this paper.
... Several researchers have supported the suggestion that applicants may experience difficulty faking a GBA, based on specific design decisions and characteristics that are built into the game (e.g., Nikolaou et al., 2019). However, limited research in the area of faking in GBAs highlights a gap in the research literature. ...
... However, limited research in the area of faking in GBAs highlights a gap in the research literature. There have been several calls for additional research to explore the potential impact that GBA design can have on faking behaviors Gkorezis et al., 2021;Ihsan & Furnham, 20108;Mislevy et al., 2016;Nikolaou et al., 2019). ...
... When given instructions to fake, participants were less reluctant to fake in the video game condition. Additionally, Nikolaou et al. (2019) discussed that the nature of a gamified version of a Situational Judgement Test (SJT) would make it more difficult for individuals to present the most desirable behavior due to the ambiguity of the constructs that are being assessed in the test. Without the transparency of what is being measured, the SJT would be more likely to elicit natural responses from applicants. ...
Experiment Findings
Full-text available
Abstract The use of game-based approaches is growing in popularity both in research and practice. However, little research has been done on faking behaviors in game-based assessments (GBAs). Understanding faking in GBAs is relevant as organizations continue to develop and integrate GBAs for selection purposes. This study examines the relationships of faked GBA scores with honest and faked scores from a self-report measure of personality. We collected measures of personality using the Five Factor Model, evaluating four traits relevant to a sales manager position (i.e., high conscientiousness, high extraversion, low neuroticism, and low agreeableness). From our group of participants, we evaluated the degree to which participants faked the self-report measures (i.e., faking extent). We used this measure to identify individuals who faked well (i.e., correctly distorted scores across personality subfactors). These good fakers were compared to poor fakers with results demonstrating significantly improved faked self-report scores but not faked GBA personality scores. This provides preliminary evidence that good fakers can generally manipulate faked scores in the desirable way on self-report measures but may have experienced more difficulty manipulating their scores on the GBA measures used in this particular study. Our findings may be relevant for researchers and practitioners seeking to use GBAs in situations where test-takers may have an incentive to fake (e.g., recruitment and hiring practices).. Our results also contribute to a much-needed research area exploring the various uses and functions of GBAs when compared to traditional measures. Keywords: game-based assessments, self-report, faking extent, personality assessment, Big Five
... On the local front in Malaysia, [11] stated that soft skills that would need to be assessed amongst graduates in Higher Education Institutions for work readiness include Responsibility, Positivity, Time Management, Teamwork, Communication, Leadership, Creative and Innovation . More specific toward this research study of using games in assessing graduate employability [12] elaborated on eight(8) soft skills that seemed to become more and more relevant and prevalent in today's demanding working environment which include Resilience, Adaptability, Flexibility/Willingness to Change and Decision making, Teamwork, Learning Agility, Accountability and Integrity. This skill taxonomy had been adopted to an online Game Based Assessment (OWIWI) deployed by his team which has been tested and deployed. ...
... Here is was stated that SJTs connected with other evidence based assessment increases the measurable validity in predicting job performance. In a more related study/findings by [12] reported that converting a SJT (Situational Judgment Test) to an adventure story with game elements confirms the construct validity of the measure of Resilience, Adaptability, Flexibility and Decision Making hence revealing positive results. In his research a fully working game called OWIWI is used which is also currently a readily available Game service provided by the authors. ...
... The soft skills used for assessments are based on the eight (8) skills adopted in the Game Based Assessment (OWIWI) and adopted in the study done by [41] and [12]. These soft skills include Resilience, Adaptability, Accountability, Willingness to Change, Decision Making, Integrity, Resilience and Teamwork. ...
Full-text available
Emerging tools such as Game Based Assessments have been valuable in talent screening and matching soft skills for job selection. However, these techniques/models are rather stand alone and are unable to provide an objective measure of the effectiveness of their approach leading to mismatch of skills. In this research study, we are proposing a Theoretical Hybrid Model, combining aspects of Artificial Intelligence and Game Based Assessment in profiling, assessing and ranking graduates based on their soft skills. Firstly, an Intelligent Controller is used to extract and classify the graduate skill profile based on data findings extracted using traditional assessment methods of self-evaluation and interview. With motivation and engagement as a competitive difference, an existing Game Based Assessment (OWIWI) is then used to assess the soft skills of these graduates hence generating a Graduate Profile based on results of the game. Moving forward, a ranking technique is then applied to match the profile to selected job requirements based on soft skills required for the job and the graduate strength. Finally, a comparison analysis is concluded based on the soft skills profile obtained before employment (pre-employment) and objective measure feedback of soft skills obtained after employment (post-employment) to provide a validity check to study the effectiveness of the overall Hybrid Model. Specifically, data obtained from this study can be useful in solving issues of unemployment due to mismatch of soft skills at the Higher Learning Institution level.
... The commercial use of game-based assessment has increased substantially in recent years. For example, personality (Barends et al., 2019;McCord et al., 2019) and non-cognitive constructs such as resilience, adaptability and flexibility Nikolaou et al., 2019) have been assessed for personnel selection via games. However, the rapid expansion of commercial gamified assessments has attracted reservations about their psychometric properties. ...
... On a related note, the incremental and criterion-related validity of this methodology is yet to be established, above and beyond existing measures of resilience. Iterative validation of game-and simulation-based assessments includes determining their utility in predicting real-world outcomes (and being implemented in high-stakes environments; Georgiou et al., 2019;Nikolaou et al., 2019). These outcomes could be subjective or objective, for instance, attrition rates and posttraumatic stress trajectories in military personnel (Bonanno, 2012); game performance consistency and injury rehabilitation in competitive athletes ; or job performance and burnout in employees such as healthcare professionals (Robertson et al., 2015). ...
Full-text available
Modern technologies have enabled the development of dynamic game- and simulation-based assessments to measure psychological constructs. This has highlighted their potential for supplementing other assessment modalities, such as self-report. This study describes the development, design, and preliminary validation of a simulation-based assessment methodology to measure psychological resilience—an important construct for multiple life domains. The design was guided by theories of resilience, and principles of evidence-centered design and stealth assessment. The system analysed log files from a simulated task to derive individual trajectories in response to stressors. Using slope analyses, these trajectories were indicative of four types of responses to stressors: thriving, recovery, surviving, and succumbing. Using Machine Learning, the trajectories were predictive of self-reported resilience (Connor-Davidson Resilience Scale; Connor & Davidson, 2003) with high accuracy, supporting construct validity of the simulation-based assessment. These findings add to the growing evidence supporting the utility of gamification in assessment. Importantly, these findings address theoretical debates about the construct of resilience, adding to its theory, supporting the combination of the ‘trait’ and ‘process’ approaches to its operationalization.
... However, some researchers have argued that self-report measures may tend to have lower construct validities due to contaminants such as faking or other forms of social desirability, and thus they have called for alternatives to self-report measures of personality that might prevent faking and improve construct validity (Morgeson et al., 2007). By measuring behavioral aspects of personality, GBAs reflect one alternative that offers this promise (Armstrong et al., 2016;Nikolaou et al., 2019). However, it is important to note that GBAs are not monolithic; in fact, one could create an infinite number of possible GBAs, because GBAs are a method, not a construct (Arthur & Villado, 2008;Landers, 2019). ...
... Moreover, if organizations use GBAs to hire employees, the goal of validly measuring job-relevant constructs must be satisfied. Although our goal was to simply demonstrate if GBAs can be used to measure personality and not explicitly whether they can be used for personnel selection purposes, similar GBAs do have the potential to be used in those settings(Armstrong et al., 2016;Nikolaou et al., 2019), especially if they are modified to reflect a workplace setting. For example, if a particular job requires extensive teamwork and effective soft skills, GBAs might include actions that require collaboration or communication with other players and be presumed to measure related personality traits and skills. ...
Full-text available
Using game-based assessments (GBAs) to assess and select job applicants presents the dual challenges of measuring intended job-relevant constructs while analyzing GBA data that contain more predictors than observations. Exploring those challenges, we analyzed two GBAs that were designed to measure conscientiousness facets (i.e., achievement striving, self-discipline, and cautiousness). Scores on traditional measures of personality and cognitive ability were modeled using either a restricted set of GBA predictors using cross-validated ordinary least squares (OLS) regression or by the fuller set (p = 248) using random forests regression. Overall, the prediction of personality was near-zero; but the latter approach explained 14%–30% of the variance in predicting cognitive ability. Our findings warn of GBAs potentially measuring unintended constructs rather than their intended constructs. Practitioner points • Game-based assessment (GBA) vendors often claim to measure individual differences, but related GBA research is often lacking. • The current study developed two GBAs intended to measure conscientiousness and its facets. • Results demonstrated that our GBAs were measuring cognitive ability instead of conscientiousness, despite deliberate design attempts to avoid the former. • Results suggested the usefulness of machine learning to understand GBA data and the constructs underlying them.
... This posits a risk for GAs because in these cases, test scores are conflated and job candidates with little or no prior gaming experience may perceive the use of GAs in a highstakes context as unfair . Therefore, recent studies have called to take the impact of technology usage into account when investigating GAs for personnel selection (e.g., Armstrong et al., 2016;Nikolaou et al., 2019). The present study addressed this concern and found that neither GSST performance nor acceptance were dependent on participants' familiarization and experience with such methods. ...
Full-text available
The present study contributes to the emerging field of gamification in personnel selection by examining validity and acceptance of the Gamified Set-Shifting Task (GSST), which is based on a well-established neuropsychological test of cognitive flexibility, the Wisconsin Card Sorting Test (WCST). Results based on a sample of 180 participants in an online study provided preliminary support for construct and criterion-related validity. The GSST was better accepted among test-takers than both the WCST and a cognitive ability test. Overall, the findings suggest that the GSST may be an attractive and valid method to assist organizations in selecting employees who are able to adapt to changing environments.
Despite the growing interest in utilizing commercial off-the-shelf (COTS) games for instructional and assessment purposes there is a lack of research evidence regard- ing COTS games for these applications. This chapter considers the application of COTS games for instruction and assessment and provides preliminary evidence com- paring COTS game scores to traditional multiple-choice assessments. In a series of four studies, we collected data and compared results from the performance in a COTS game to scores on a traditional multiple-choice assessment written for the purposes of each study. Each assessment was written to evaluate the same content presented in the game for each respective study. Three of the four studies demonstrated a significant correlation between the COTS game and the traditional multiple choice assessment scores. The non-significant value in Study 4 was likely due to a small sample size (n < 100). The results of these studies support our hypothesis and demonstrate that COTS games may be a useful educational tool for training or assessment purposes. We recommend that future research focuses on specific applications of COTS games to explore further opportunities for utilizing COTS in education and assessment.
Virtual reality (VR) is a potential assessment format for constructs dependent on certain perceptual characteristics (e.g., realistic environment and immersive experience). The purpose of this series of studies was to explore methods of evaluating reliability and validity evidence for virtual reality assessments (VRAs) when compared with traditional assessments. We intended to provide the basis of a framework for evaluating VR assessments, given that there are important fundamental differences between VR assessments and traditional assessment formats. Two commercial off-the-shelf (COTS) games (i.e., Project M and Richie's Plank Experience) were used in Studies 1 and 2, while a game-based assessment (GBA; Balloon Pop, designed for assessment) was used in Study 3. Studies 1 and 2 provided limited evidence for the reliability and validity of the VRAs. However, no meaningful constructs were measured by the VRA in Study 3. Findings demonstrate limited evidence for these VRAs as viable assessment options through the validity and reliability methods utilized in the present studies, which in turn emphasizes the importance of aligning the assessment purpose to the unique advantages of a VR environment.
Practitioner points:
• Findings were mixed in correlating the VRA scores with assessments similar to the intended constructs being measured.
• Details are provided on the design and scoring for the presented VRAs.
• Although research using VRAs is still preliminary, there are promising methods through which we might design unique behavior-based evaluation.
The present study contributes to the emerging field of gamification in personnel selection by examining the validity and acceptance of the Gamified Set-Shifting Task (GSST), which is based on a well-established neuropsychological test of cognitive flexibility, the Wisconsin Card Sorting Test (WCST). Results based on a sample of 180 participants in an online study provided preliminary support for construct and criterion-related validity. The GSST was better accepted among test-takers than both the WCST and a cognitive ability test. Overall, the findings suggest that the GSST may be an attractive and valid method to assist organizations in selecting employees who are able to adapt to changing environments.
Practitioner points:
• Cognitive flexibility (adapting to new, changing, and unexpected circumstances) may become increasingly important in modern work environments.
• We redesigned a neuropsychological test into a gamified instrument for measuring applicants' cognitive flexibility.
• Game scores are positively associated with academic performance and self-reported adaptability.
• Study participants appeared to prefer the gamified tool over traditional means of personnel selection.
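The scoring logic behind WCST-style set-shifting tasks such as the GSST can be sketched in a few lines. This is only an illustrative sketch: the rule names, the shift criterion, and the "rigid responder" below are assumptions for demonstration, not the GSST's actual design.

```python
# Minimal sketch of WCST-style set-shifting scoring (illustrative, not the GSST):
# a hidden sorting rule switches after a run of correct responses, and responses
# that match the *previous* rule after a switch count as perseverative errors,
# the classic index of low cognitive flexibility.
RULES = ("color", "shape", "number")

def score_set_shifting(responses, shift_after=5):
    """Score a sequence of rule guesses against a hidden rule that shifts after
    `shift_after` consecutive correct responses. Returns (correct, perseverative)."""
    rule_idx, streak = 0, 0
    prev_rule = None
    correct = persev = 0
    for guess in responses:
        if guess == RULES[rule_idx]:
            correct += 1
            streak += 1
            if streak == shift_after:  # rule switches after a run of correct sorts
                prev_rule = RULES[rule_idx]
                rule_idx = (rule_idx + 1) % len(RULES)
                streak = 0
        else:
            if guess == prev_rule:     # sticking to the old rule = perseveration
                persev += 1
            streak = 0
    return correct, persev

# A "rigid" responder keeps sorting by color and never adapts after the first shift.
print(score_set_shifting(["color"] * 20))  # → (5, 15)
```

A flexible responder who tracks the rule changes would instead accumulate correct responses with no perseverative errors.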
Introduction. Psychological and pedagogical support for the professionalization of a teacher should be based on modern knowledge of the development of pedagogical abilities. The most popular strategy for studying abilities is still the analytical approach, which does not allow abilities to be considered as a holistic, continuously developing psychological formation. The purpose of the article is to present the results of research into the development of the psychological system of pedagogical abilities of students – future primary school teachers – in the context of continuous pedagogical education at secondary school, pedagogical college and pedagogical university. Materials and Methods. The survey involved 201 subjects, including students of the pedagogical class (n = 15), students of the vocational pedagogical college (n = 82), and students of the pedagogical university (n = 104). In order to study the psychological system of pedagogical abilities, test methods, questionnaires, self-assessment scales, and an analysis of indicators of academic performance were used. The analysis of the psychological system of abilities was made using systemic indices (coherence, divergence, organization), assessing the homogeneity/heterogeneity of structures, and identifying the basic and leading components of the system. Results. The uneven development of the psychological system of pedagogical abilities at different levels of pedagogical education was shown. It has been established that the development of the system of pedagogical abilities during the period of study at school, college and university cannot be characterized as continuous. Students of the pedagogical class are characterized by a higher organization of the ability system than those in college and university; college students, by continuous restructuring and a decrease in organization by the end of their studies; university students, by a more stable and progressively developing system of abilities. Qualitative differences in the structural organization of the system are determined by the different contributions of the types of abilities (individual, subjective, personal) to its functioning and to the achievement of academic results. The organization of the system of abilities of schoolchildren is largely determined by the abilities of the individual; of college students, by the abilities of the subject of activity and the personality; of university students, by the abilities of the individual. Discussion and Conclusion. The conclusions formulated in the research paper serve as the basis for the organization of special activities for the purposeful formation of the system of pedagogical abilities in the conditions of continuous pedagogical education.
Executive Summary: With the increased popularity of positive psychology, there is a greater emphasis on exploring positive human resource strengths to address workplace challenges and augment organizational performance. Previous research suggests that resilience positively relates to desired employee attitudes, behaviours, and performance, such as organizational citizenship behaviour (OCB). However, it would be intriguing to understand the underlying mechanism of the resilience-OCB relationship. Towards this, the study examines the mediating role of organizational commitment. In light of the identified research gaps, the study explores the mechanism of the relationship between resilience and OCB in the context of Indian organizations. The study sample comprised employees (N = 345) working in the manufacturing industries of Uttarakhand and Himachal Pradesh in India. Data were collected with the help of self-administered questionnaires through systematic random sampling. A model was developed and tested in which the effects of resilience on OCB were hypothesized to be mediated by organizational commitment. Hypothesis testing was done using hierarchical multiple regression, and for testing the mediating effects, bootstrapping in SPSS was used. The results provide empirical evidence for the positive relationship between resilience and OCB. The results also emphasize that resilience influences organizational commitment as well. As hypothesized, the results supported the mediating effect of organizational commitment in the relationship between resilience and OCB, explaining the underlying mechanism of the resilience-OCB relationship. The mediation is partial, which means that resilience influences OCB directly as well as indirectly through organizational commitment. The study offers significant advancements for both resilience and OCB research. The results also offer direction to organizations that desire to stimulate and maintain employee outcomes for competitive advantage. Employee outcomes in organizations can be improved by developing resilience among employees. Implications of promoting resilience at the workplace are discussed.
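The mediation procedure this abstract describes (an indirect effect tested with a percentile-bootstrap confidence interval) can be sketched as follows. The data are synthetic and the path coefficients are illustrative assumptions, not the study's estimates; the study itself used SPSS rather than Python.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data mimicking the design: resilience (x) -> commitment (m) -> OCB (y)
n = 345
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)              # path a (assumed)
y = 0.3 * x + 0.4 * m + rng.normal(size=n)    # paths c' and b (assumed)

def indirect_effect(x, m, y):
    """a * b: the X->M slope times the M->Y slope controlling for X."""
    Xa = np.column_stack([np.ones(len(x)), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    Xb = np.column_stack([np.ones(len(x)), x, m])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][2]
    return a * b

# Percentile-bootstrap 95% CI for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)               # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ≈ {indirect_effect(x, m, y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Mediation is supported when the bootstrap CI excludes zero; it is partial (as reported here) when the direct path c' also remains nonzero.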
Given the rapid advances and the increased reliance on technology, the question of how it is changing work and employment is highly salient for scholars of organizational psychology and organizational behavior (OP/OB). This article attempts to interpret the progress, direction, and purpose of current research on the effects of technology on work and organizations. After a review of key breakthroughs in the evolution of technology, we consider the disruptive effects of emerging information and communication technologies. We then examine numbers and types of jobs affected by developments in technology, and how this will lead to significant worker dislocation. To illustrate technology's impact on work, work systems, and organizations, we present four popular technologies: electronic monitoring systems, robots, teleconferencing, and wearable computing devices. To provide insights regarding what we know about the effects of technology for OP/OB scholars, we consider the results of research conducted from four different perspectives on the role of technology in management. We also examine how that role is changing in the emerging world of technology. We conclude by considering approaches to six human resources (HR) areas supported by traditional and emerging technologies, identifying related research questions that should have profound implications both for research and for practice, and providing guidance for future research.
Academic buoyancy is developed as a construct reflecting everyday academic resilience within a positive psychology context and is defined as students' ability to successfully deal with academic setbacks and challenges that are typical of the ordinary course of school life (e.g., poor grades, competing deadlines, exam pressure, difficult schoolwork). Data were collected from 598 students in Years 8 and 10 at five Australian high schools. Half-way through the school year and then again at the end of the year, students were asked to rate their academic buoyancy as well as a set of hypothesized predictors (self-efficacy, control, academic engagement, anxiety, teacher-student relationship) in the area of mathematics. Multilevel modeling found that the bulk of variance in academic buoyancy was explained at the student level. Confirmatory factor analysis and structural equation modeling showed that (a) Time 1 anxiety (negatively), self-efficacy, and academic engagement significantly predict Time 1 academic buoyancy; (b) Time 2 anxiety (negatively), self-efficacy, academic engagement, and teacher-student relationships explain variance in Time 2 academic buoyancy over and above that explained by academic buoyancy at Time 1; and (c) of the significant predictors, anxiety explains the bulk of variance in academic buoyancy.
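The multilevel finding above ("the bulk of variance in academic buoyancy was explained at the student level") is typically quantified with an intraclass correlation: the share of outcome variance attributable to the grouping level. Below is a minimal sketch using the one-way ANOVA estimator of ICC(1) on synthetic two-level data; the school counts, effect sizes, and balanced design are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-level data: students nested in schools, with deliberately small
# between-school variance so most variance sits at the student level.
n_schools, n_per = 5, 120
school_effects = rng.normal(scale=0.3, size=n_schools)
scores = np.concatenate([u + rng.normal(scale=1.0, size=n_per) for u in school_effects])
groups = np.repeat(np.arange(n_schools), n_per)

def icc1(scores, groups):
    """ICC(1) via the one-way random-effects ANOVA estimator (balanced design)."""
    sizes = np.bincount(groups)
    grand = scores.mean()
    means = np.array([scores[groups == g].mean() for g in np.unique(groups)])
    msb = np.sum(sizes * (means - grand) ** 2) / (len(sizes) - 1)   # between-group MS
    msw = np.sum((scores - means[groups]) ** 2) / (len(scores) - len(sizes))  # within
    n_bar = len(scores) / len(sizes)
    return (msb - msw) / (msb + (n_bar - 1) * msw)

print(f"ICC(1) ≈ {icc1(scores, groups):.2f}")  # small value -> variance mostly at student level
```

A small ICC like this is what licenses the paper's focus on student-level predictors such as anxiety and self-efficacy.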
Game-thinking is beginning to appear in a wide variety of non-game contexts, including organizational support settings like human resource management (HRM). The purpose of this chapter is two-fold: 1) to explore the opportunities for game-thinking via gamification and serious games in HRM based on current and previous HRM literature and 2) to identify future research areas at the intersection of game-thinking and HRM. Prevailing HRM theories will be applied to the use of game-thinking in different sub-fields of HRM, including recruitment, selection, training, and performance management.
Recent reports suggest that an increasing number of organizations are using information from social media platforms such as Facebook to screen job applicants. Unfortunately, empirical research concerning the potential implications of this practice is extremely limited. We address the use of social media for selection by examining how recruiter ratings of Facebook profiles fare with respect to two important criteria on which selection procedures are evaluated: criterion-related validity and subgroup differences (which can lead to adverse impact). We captured Facebook profiles of college students who were applying for full-time jobs, and recruiters from various organizations reviewed the profiles and provided evaluations. We then followed up with applicants in their new jobs. Recruiter ratings of applicants' Facebook information were unrelated to supervisor ratings of job performance (rs = −.13 to −.04), turnover intentions (rs = −.05 to .00), and actual turnover (rs = −.01 to .01). In addition, Facebook ratings did not contribute to the prediction of these criteria beyond more traditional predictors, including cognitive ability, self-efficacy, and personality. Furthermore, there was evidence of subgroup differences in Facebook ratings that tended to favor female and White applicants. The overall results suggest that organizations should be very cautious about using social media information such as Facebook to assess job applicants.
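The incremental-validity check reported here ("Facebook ratings did not contribute to the prediction of these criteria beyond more traditional predictors") is typically run as a hierarchical regression comparing R² before and after adding the new predictor. A minimal sketch with synthetic data follows; the predictor names, effect sizes, and sample size are illustrative assumptions, not the study's dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: performance depends on cognitive ability and conscientiousness,
# while the "Facebook rating" is pure noise (mirroring the null result reported).
n = 400
cog = rng.normal(size=n)
consc = rng.normal(size=n)
fb_rating = rng.normal(size=n)
perf = 0.5 * cog + 0.3 * consc + rng.normal(size=n)

def r2(X, y):
    """R-squared of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

step1 = r2(np.column_stack([cog, consc]), perf)             # traditional predictors only
step2 = r2(np.column_stack([cog, consc, fb_rating]), perf)  # + Facebook rating
print(f"R² step 1 = {step1:.3f}, ΔR² = {step2 - step1:.4f}")  # ΔR² near zero
```

A ΔR² close to zero (and non-significant by an F test of the R² change) is the pattern behind the study's conclusion that the social media ratings add no predictive value; it is also the same logic the main article applies when testing its game-based assessment beyond personality and cognitive ability.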
This chapter considers definitions and the boundaries between gamification and serious games. It then primarily concentrates on serious games, recognizing that this covers gamification in general and the common challenges and potential benefits. The chapter also covers current uses of serious games. Serious games may have a marked impact on the field of personnel selection. The chapter further discusses the rationale for using gaming techniques for personnel selection and offer practical guidelines for leveraging this methodology in a selection context. Developing and implementing serious games for personnel selection requires adherence to the same psychometric and legal considerations as any other selection tool, but there are some unique aspects that also need to be considered. These are grouped into the following categories: objectives, design and utilization. Each aspect is discussed in turn. Finally, future directions in research and application of gamification and serious games are discussed.
This article reviews three innovations that not only have the potential to revolutionize the way organizations identify, develop and engage talent, but are also emerging as tools used by practitioners and firms. Specifically, we discuss (a) machine-learning algorithms that can evaluate digital footprints, (b) social sensing technology that can automatically decode verbal and nonverbal behavior to infer personality and emotional states, and (c) gamified assessment tools that focus on enhancing the user-experience in personnel selection. The strengths and limitations of each of these approaches are discussed, and practical and theoretical implications are considered.
Work organizations and the employees within these organizations face considerable environmental pressures requiring adaptive change. Several forces have contributed to this need for greater adaptation. These are described in many excellent sources (e.g., Cascio, 2003; Ilgen & Pulakos, 1999); here we briefly review their implications for individual adaptability.
Conclusions about the validity of cognitive ability and personality measures based on meta-analyses published mostly in the past decade are reviewed at the beginning of this article. Research on major issues in selection that affect the use and interpretation of validation data are then discussed. These major issues include the dimensionality of personality, the nature and magnitude of g in cognitive ability measures, conceptualizations of validity, the nature of the job performance domain, trade-offs between diversity and validity, reactions to selection procedures, faking on personality measures, mediator and moderator research on test–performance relationships, multilevel issues, Web-based testing, the situational framing of test stimuli, and the context in which selection occurs.