Exploring the Relationship of a Gamified Assessment with Performance

Ioannis Nikolaou, Konstantina Georgiou and Vasiliki Kotsasarlidou
Athens University of Economics and Business (Greece)

Abstract

Our study explores the validity of a game-based assessment method assessing candidates’ soft skills. Using self-reported measures of performance (job performance, Organizational Citizenship Behaviors (OCBs), and Grade Point Average (GPA)), we examined the criterion-related and incremental validity of a game-based assessment above and beyond the effect of cognitive ability and personality. Our findings indicate that a game-based assessment measuring soft skills (adaptability, flexibility, resilience, and decision-making) can predict self-reported job and academic performance. Moreover, a game-based assessment can predict academic performance above and beyond personality and cognitive ability tests. The effectiveness of gamification in personnel selection is discussed, along with research and practical implications introducing recruiters and HR professionals to an innovative selection technique.

Keywords: academic performance, game-based assessments, job performance, selection methods.

Received 30 April 2018; Revised 31 October 2018; Accepted 3 November 2018

The Spanish Journal of Psychology (2019), 22, e6, 1–10.
© Universidad Complutense de Madrid and Colegio Oficial de Psicólogos de Madrid
doi:10.1017/sjp.2019.5

Correspondence concerning this article should be addressed to Ioannis Nikolaou, Athens University of Economics and Business, Department of Management Science and Technology, 104 34 Athens (Greece). E-mail: inikol@aueb.gr
In order to gain a competitive advantage and make a
profit from their activities, organizations need a good
strategy. But to gain a sustainable competitive advantage, one that lasts and cannot be easily imitated by competitors, organizations must also have the people resources in place to successfully implement the strategy. Along these lines, the need to identify talented prospective employees who possess the skills required to fit the job and meet performance standards is apparent for every business. Traditional selection
methods, such as general mental ability and personality
tests, predict job performance to some extent (Ryan &
Ployhart, 2014). A number of researchers have recently
suggested that the use of gamification in personnel
selection, such as game-based assessments, might pre-
dict job performance beyond traditional selection
methods (e.g., Armstrong, Landers, & Collmus, 2016;
Fetzer, Mcnamara, & Geimer, 2017). Game-based assess-
ments is a new assessment method incorporating game
elements in employee selection and is lately widely
applied in personnel selection practice, raising ques-
tions about its ability to predict job performance. To
the best of our knowledge, no published empirical
research has established the effectiveness of game-
based assessments in the employee selection process.
Our study is designed to examine the potential of a
game-based assessment in predicting a number of
performance measures. Specifically, we test the
relationship between a game-based assessment and
performance criteria (e.g., perceived job performance,
Grade Point Average-GPA, perceived Organizational
Citizenship Behavior-OCB) to explore its criterion
related validity. We also explore the extent to which a
game-based assessment predicts performance beyond
traditional selection methods (personality measures
and cognitive ability).
Traditional selection tests and performance
Cognitive ability and personality tests are widely used
nowadays by organizations in an effort to predict
future work performance. Several studies and meta-
analyses support not only the validity of cognitive
ability and personality tests but also their effective
combination in predicting job performance (Schmitt,
2014). Cognitive ability tests measure the levels of gen-
eral cognitive ability or intelligence, as well as aspects
of it (e.g., numerical, verbal, abstract, and spatial
ability). Meta-analytic findings indicate that both general cognitive ability and specific cognitive abilities successfully predict performance and work-related outcomes (e.g., Ones, Dilchert, & Viswesvaran, 2012). Moreover, cognitive ability has been shown to be the single best predictor of performance at work, as well as of performance outcomes in the majority of job positions and situations (Schmitt, 2014). As far as
personality is concerned, the most popular personality
model is the five-factor model of personality (FFM)
studied extensively in diverse countries and cultures
around the world. The predictive validity of at least
two key factors of the FFM (especially conscientious-
ness but also neuroticism) has been well established
across different job positions and organizations,
whereas, meta-analytic findings (Barrick, Mount, &
Judge, 2001) have also supported the predicted valid-
ity of most personality dimensions of the FFM.
In the performance domain we often study crite-
rion measures, such as academic attainment and
OCB, apart from job performance. OCBs or extra-
role performance are defined as the voluntary and
non-mandatory employee behaviors that positively
influence organizational effectiveness and contribute
to the overall productivity of the organization (Smith,
Organ, & Near, 1983). Both emotional and cognitive
intelligence have been found to be related to organiza-
tional citizenship behaviors (e.g., Cote & Miners, 2006).
Personality traits such as agreeableness and conscientiousness have also been found to predict OCB (e.g., Chiaburu, Oh, Berry, Li, & Gardner, 2011).
Similarly, academic performance has been found to be
significantly predicted by personality and cognitive
ability. Academic performance is usually measured
with student grades or grade point average-GPA,
which is supported to predict performance at work
(Roth, BeVier, Switzer, & Schippmann, 1996). A number
of meta-analytic studies exploring the relationship
between personality and academic performance sup-
ported that agreeableness, conscientiousness and
openness to experience, as well as intelligence, predict
academic performance (Poropat, 2009; Strenze, 2007).
The relationship between cognitive ability and aca-
demic performance is also well established (Chamorro-
Premuzic & Furnham, 2008). “Academic performance has
been the criterion for validating IQ tests for over a century,
and one would hardly refer to these tests as “intelligence”
measures if they did not correlate with academic perfor-
mance” (Chamorro-Premuzic & Furnham, 2008, p. 1597).
It is worth noting that both general cognitive ability and specific cognitive abilities (working memory, processing speed, spatial ability) predict academic performance, and that specific cognitive abilities add prediction of academic performance beyond general cognitive ability (Rohde & Thompson, 2007).
To sum up, a large body of research identifies general mental ability and personality tests as important predictors of performance. However, traditional selection methods such as personality tests predict job performance only to some extent and are prone to faking and social desirability (e.g., Morgeson et al., 2007; Ryan & Ployhart, 2014), problems that the application of gamification in employee testing might restrain, thereby increasing an assessment’s predictive validity and utility in practice. Moreover, the advent of technology has started to render traditional selection methods obsolete, paving the way for more technologically advanced methods capable of reducing the cost of hiring and improving applicant reactions.
Game-based assessment methods and performance
Gamification, the application of game-design ele-
ments in non-game contexts (Armstrong et al., 2016),
has recently caught the attention of researchers and
practitioners in Work/Organizational Psychology and
Human Resources Management, as a promising tool in
employee selection. Employee testing methods have
started to incorporate game elements and designs
turning into assessments that are likely to be more
fun and attractive to candidates, as well as more diffi-
cult to fake (Armstrong et al., 2016). The addition of game elements to assessments might make it harder for candidates to decode them and identify the correct answer, as personality traits, intentions, and behaviors are assessed indirectly. For example, in a gamified Situational Judgement Test (SJT), dressing the scenarios and answer options in game elements might make the desirable behaviors less obvious to candidates; as a result, it becomes more difficult for them to distort, intentionally or unintentionally, what their reactions would be in a given situation, since the setting is removed from real-life contexts.
Moreover, building on the concept of “stealth assess-
ment”, Fetzer et al. (2017) highlighted the potential
of game-based assessments in predicting job perfor-
mance. Stealth assessments can accurately and effi-
ciently diagnose the level of students’ competencies
by extracting continuously performance data that are
gathered during the course of playing/learning (Shute,
Ventura, Bauer, & Zapata-Rivera, 2009). In other words,
stealth assessment is an assessment that is “seamlessly
woven into the fabric of the learning or gaming environment
so that it’s virtually invisible…reducing thus test anxiety
while not sacrificing validity and consistency” (Shute,
2015, p. 63). Along these lines, a gamified assessment
environment might distract candidates from the fact that they are being assessed, reducing test anxiety and promoting behaviors that are more likely to appear unconsciously instead of the desirable or socially acceptable
ones. Game engagement and the use of contexts diag-
nosing how an individual handled a given problem –
similar to work-sampling techniques - might lead to
more robust inferences about performance than tradi-
tional selection inventories that rely on self-reported
measures (Fetzer et al., 2017). Taking into consideration
all the evidence mentioned above, we aim to explore
the effectiveness of the game-based assessment method
measuring four soft skills (i.e., resilience, adaptability,
flexibility, and decision-making) by testing whether its
dimensions are related to performance measures over
and above traditional selection measures.
A major challenge that employers nowadays face
when hiring young graduates is the lack of applicants
with the right skills and competencies (Picchi, 2016,
August 31). Among the most desirable soft skills that
employers are looking for are adaptability, flexi-
bility, decision-making, and resilience (e.g., Gray, 2016;
McKinsey & Company, 2017). Resilience, the ability to
bounce back from adversities (Luthans, 2002), might
be vital for both personal and job effectiveness with
numerous positive outcomes in work and academic
settings. For example, resilient individuals are likely to
have higher levels of job performance, job satisfaction
and organizational commitment (e.g., Avey, Reichard,
Luthans, & Mhatre, 2011), as well as, OCB (Paul, Bamel,
& Garg, 2016). Moreover, students with higher levels
of resilience are likely to demonstrate increased aca-
demic performance levels, as well as higher class
participation, enjoyment and self-esteem (Martin &
Marsh, 2006, 2008). Similarly, adaptability, the “response
or people’s adjustment to changing environmental situa-
tions” (Hamtiaux, Houssemand, & Vrignaud, 2013,
p. 130) has positive outcomes in both academic and
work contexts. For example, successful students (GPA
of 80% or more) were found to have high levels of
interpersonal, adaptability, and stress management
skills (Parker et al., 2004). Moreover, high adaptability
is related to positive relationships and behaviors in
school, such as studying, leadership, and reduced
school problems (Brackett, Rivers, Reyes, & Salovey,
2012). In the work context, adaptability is important
in performing well, handling ambiguity, and dealing
with uncertainty and stress (Kehoe, 2000). Moreover, volunteering to help co-workers (an aspect of OCB) might require one to adapt to changing co-worker behavior (Ployhart & Bliese, 2006, p. 11). Similarly to adapt-
ability, flexibility, defined as the individual’s capacity
to adapt, is likely to have positive outcomes in work,
academic and job seeking settings (Golden & Powell,
2000). Individuals with high levels of flexibility are able to address different situations, thereby creating value for organizations rather than harming them through an inability to adjust to change (Bhattacharya, Gibson, & Doty, 2005). Moreover, OCB performers are likely to increase their flexibility in order to adjust to the requirements of various roles and settings at work, thus displaying behaviors that contribute to organizational effectiveness (Kwan & Mao, 2011). Organizational
success, especially in changing environments, depends
also largely on effective decision-making, defined as
an intellectual process leading to a response to cir-
cumstances through the selection among alternatives
(Nelson, 1984). Employees who are capable of effective decision-making devote effort to analyzing information in order to better understand a company’s threats, opportunities, and options; consult other people and collaborate in making decisions; and act proactively in getting things done, thereby enhancing organizational performance (Miller & Lee, 2001). Likewise, participation in decision-making leads to positive outcomes within educational settings, such as OCB (Somech, 2010).
Taking into consideration all the evidence mentioned above, we aim to establish the effectiveness of the gamified selection method that we developed by testing whether the gamified SJT dimensions are related to performance, and in particular to job performance, OCB, and GPA, over and above traditional selection measures (e.g., personality and cognitive ability tests). Therefore, we state the following hypotheses:
H1: Game-based assessment dimensions will be positively associated with participants’ job performance scores.
H2: Game-based assessment dimensions will be positively associated with participants’ GPA.
H3: Game-based assessment dimensions will be positively associated with participants’ OCB.
H4: Game-based assessment dimensions will provide
incremental validity above and beyond the effect of
cognitive ability and personality in predicting par-
ticipants’ job performance scores.
H5: Game-based assessment dimensions will provide
incremental validity above and beyond the effect of
cognitive ability and personality in predicting partici-
pants’ GPA.
H6: Game-based assessment dimensions will provide
incremental validity above and beyond the effect of
cognitive ability and personality in predicting partici-
pants’ OCB.
Method
Sample & Procedure
The study was conducted in Greece during the last months of 2017. Participants were recruited via the authors’ university career office and included final-year undergraduate students, post-graduate students, and recent graduates. We invited these groups to participate in a survey about a selection method because they were approaching graduation and were likely to search for employment soon (e.g., van Iddekinge, Lanivich, Roth, & Junco, 2016).
The data collection took place in two phases. In the
first phase, participants were invited to complete the
self-reported measures of cognitive ability, personality,
performance measures, and OCB. Three to four weeks after completion, first-phase participants were invited to play the game-based assessment. A total of 193 participants took part in the first phase, and 120 of them also participated in the second phase, a response rate of 62%. The majority of them were
females (64%) with a mean age of 26 years. As far as
their education level is concerned, 46% of the partic-
ipants were final year undergraduates, 15% were
post-graduate students, another 15% were univer-
sity graduates and 24% had already acquired a post-
graduate degree. Most of them (55%) were currently
employed, working in entry-level (57.5%) or middle-
level positions (27.5%).
Measures
Cognitive ability. This was measured with items taken from the International Cognitive Ability Resource (ICAR, 2014).1 ICAR is a public-domain and open-
source tool created by Condon and Revelle (2014), aim-
ing to provide a large and dynamic bank of cognitive
ability measures for use in a wide variety of applica-
tions, including research. The test includes four item
types: Three-Dimensional Rotations, Letter and Number
Series, Matrix Reasoning, and Verbal Reasoning. We
used the 11 Matrix Reasoning items, which contain stimuli similar to those used in Raven’s Progressive Matrices and are most closely related to abstract reasoning. “The stimuli are 3x3 arrays of geo-
metric shapes with one of the nine shapes missing.
Participants are instructed to identify which of six
geometric shapes presented as response choices will
best complete the stimuli” (ICAR, 2014, p. 2).2 It is worth noting that only one answer is correct, although the options “None of the above” and “Do not know” are also available. An overall score is calculated, with higher scores indicating higher levels of cognitive ability.3
Personality. Participants completed the 50-item International Personality Item Pool (IPIP; Goldberg
et al., 2006) to assess the Five-Factor model of
personality. Each scale consisted of 10 items. Standard
IPIP instructions were presented to participants, who
responded on a 5-point Likert-type scale ranging from
1 (inaccurate) to 5 (accurate). Research has reported
good internal consistencies for IPIP factors (see, for
example, Lim & Ployhart, 2006). In our study, reli-
ability estimates were .81 for conscientiousness, .83 for
emotional stability, .83 for extroversion, .79 for agree-
ableness, and .75 for openness to experience.
Performance measures. Overall job performance was self-evaluated, by working individuals only, using a measure employed by Nikolaou and Robertson (2001). It consists of six items for which the individual indicates agreement or disagreement with the behavior described on a five-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). An overall job performance score was calculated by averaging the scores of the six items, yielding an internal consistency reliability of .91. Example items include “Achieve the
objectives of the job” and “Demonstrates expertise in all
aspects of the job”. We also asked participants to indicate
their GPA from their first degree in order to use it as an
alternative to job performance for non-working indi-
viduals. The range of the grading system in Greek
public universities is 0.00–10.00 (Excellent = 8.50–10.00,
Very Good = 6.50–8.49, Good = 5.00–6.49, and Fail =
0.00–4.59). The GPA reported by participants was the
average grade awarded for the duration of their bach-
elor studies.
Organizational Citizenship Behavior (OCB). OCBs were self-evaluated, by working individuals only, using a measure developed by Smith et al. (1983). It consists of 16 items for which the individual indicates agreement or disagreement with the behavior described on a five-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). The original scale measures two subscales: altruism and generalized compliance. However, for the purposes of the current study we used only the overall OCB score, which yielded an internal consistency reliability of .70. Example items include “I help other employees with their work when they have been absent” and “I exhibit punctuality in arriving at work on time in the morning and after lunch breaks”.
Soft skills. We used a Game-Based Assessment (GBA)
developed by Owiwi4 in order to measure the four soft
skills evaluated by the game, namely resilience, adapt-
ability, flexibility and decision-making. The four skills
are evaluated following an SJT methodology converted
into an on-line game environment, with fictional char-
acters. The Owiwi game has demonstrated satisfactory psychometric properties and a high degree of equivalence with the originally developed SJT measuring the four soft skills (Georgiou, Nikolaou, & Gouras, 2017).
Resilience is defined as “the developable capacity to
rebound or bounce back from adversity, conflict, and failure
or even positive events, progress, and increased responsi-
bility” (Luthans, 2002, p. 702). “Adaptability is related to
change and how people deal with it; that is to say, people’s
adjustment to changing environments” (Hamtiaux et al.,
2013, p. 130). Flexibility is defined as the demonstra-
tion of “adaptable as opposed to routine behaviors; it is the
extent to which employees possess a broad repertoire of
behavioral scripts that can be adapted to situation-specific demands” (Bhattacharya et al., 2005, p. 624). Finally, decision-making is defined as an intellectual process leading to a response to circumstances through selection among alternatives (Nelson, 1984). Individualized feedback is provided to all participants upon completion of the game.

1. http://icar-project.com/
2. https://icar-project.com/ICAR_Catalogue.pdf
3. For an example item visit https://icar-project.com/ICAR_Catalogue.pdf
4. www.owiwi.co.uk
Results
Table 1 presents the inter-correlation matrix of the
study’s variables. An interesting pattern in the inter-correlation matrix is that the cognitive ability measure is not associated with any of the other scales measured here. Also, the self-reported job performance measure correlates significantly with conscientiousness, emotional stability, and openness to experience from the five-factor model of personality. Moreover, the OCB measure is associated with agreeableness, in line with past research on the relationship between agreeableness and OCB, but not with conscientiousness. Finally, the soft skills assessed by the game-based assessment, which are the main focus of the current study, are not correlated with any of the criterion measures, with the exception of the positive correlation between GPA and decision-making, thus rejecting H1 and H3 and only partially supporting H2.
Next, we proceed with the examination of our
research hypotheses. Our main focus in this study is
the suitability of the game-based assessment as a selec-
tion tool, above and beyond the well-established effect
of cognitive ability and personality, especially conscien-
tiousness. Our first three hypotheses deal with the
association between game-based assessment and the
three performance criteria. In order to explore these hypotheses, we executed three separate multiple regression analyses, one for each of the three criterion measures. The results of these analyses are presented in Table 2.
The results of the regression analyses show that flex-
ibility and decision-making are positively associated
with self-reported job performance and GPA respec-
tively. The block of four skills predicts 13%, 7%, and 10% of the total variance in job performance, OCB, and GPA, respectively. Therefore, H1 and H2 are partially
confirmed, whereas H3 is rejected. Subsequently, we
explored the incremental validity of the game-based
assessment. In order to explore H4–H6, we conducted a
number of hierarchical regression analyses, controlling
for the effect of cognitive ability and the five-factor
model of personality. The results of these analyses are
presented in Table 3.
The results of these analyses demonstrate that the
soft skills measured by the game-based assessment
do not predict additional variance in either job per-
formance or OCBs for the working individuals of
our sample, above the effect of cognitive ability and personality, thus rejecting H4 and H6. However, they seem to have an important effect on GPA. More specifically, both as a group and separately (adaptability and decision-making), the soft skills demonstrate a statistically significant relationship with GPA, above and beyond the effect of cognitive ability and personality. These results support the usefulness of game-based assessments in predicting educational attainment, as measured by GPA, both as a group of skills and individually in the case of adaptability and decision-making.
Discussion
Our study explores the effectiveness of a game-based
assessment in employee selection. Extending previous
Table 1. Inter-Correlation Matrix of Study’s Variables (N = 63–120)
Scales Range M SD 1 2 3 4 5 6 7 8 9 10 11 12 13
1. Cognitive ability 11 7.69 2.33
2. Extroversion 36 34.07 7.87 –.03
3. Agreeableness 25 42.05 5.32 .08 .47**
4. Conscientiousness 35 38.42 7.10 –.05 –.16 .00
5. Emotional Stability 33 29.15 7.67 .07 .20* .14 .21*
6. Openness to experience 29 36.78 6.07 .40 .16 .20* .04 –.05
7. Resilience 58 76.35 11.85 .10 .04 .11 .14 .31 .32**
8. Flexibility 58 64.98 12.71 .05 –.03 .11 .07 .13 –.02 .20*
9. Adaptability 81 74.57 11.60 .03 .01 .07 –.12 –.09 .08 .40** .26**
10. Decision-making 46 76.42 9.49 –.00 .05 .12 .08 .12 –.03 .23* .03 .20*
11. Job Performance 14 26.21 3.07 .13 .04 .16 .40** .26* .32** .13 .22 –.07 .13
12. OCB 38 64.77 6.78 .05 .22 .39** .14 .19 .05 –.14 –.03 –.18 .12 .26*
13. GPA 3.1 7.39 0.72 .03 –.06 –.11 .13 –.03 .00 –.02 .08 –.18 .25** –.02 .07
14. Age 25 26.36 6.21 .07 –.18* .03 .11 –.04 .10 .12 .19* .11 –.12 .22 –.07 –.04
Note: *p < .05. **p < .01. ***p < .001.
research on Work/Organizational Psychology and tra-
ditional selection methods, we introduce a game-based
assessment designed to measure candidates’ soft skills
(e.g., adaptability, flexibility, decision-making), which we find to be associated with self-reported measures of
performance. Our study contributes to employee selec-
tion research, providing some support to the use of
gamification in soft skills assessments and their ability
to predict performance in work and academic settings.
For example, a game-based assessment measuring
soft skills, such as decision-making and flexibility, can
predict test-takers’ self-reported job performance and
GPA. By incorporating game elements into assessments that do not use self-reported measures but instead assess behavioral intentions, test-takers’ attraction to and engagement with the assessment might be enhanced, while it might be more difficult for them to
understand what is being assessed and what the cor-
rect answer is (Armstrong et al., 2016; Fetzer et al.,
2017). As such, the use of game elements and designs
might improve the validity of assessments.
Moreover, Armstrong et al. (2016) suggested that
game-based assessments, such as gamified simula-
tions, might be employed to assess important pre-
dictor constructs like learning agility in employee
selection settings where survey methodology may
not be adequate. Along these lines, our study extends
research on traditional selection methods, exploring
the incremental validity of a game-based assessment
assessing soft skills. Game-based assessments mea-
suring soft skills, such as adaptability and decision
making, can predict academic performance (e.g., GPA),
above and beyond traditional selection methods (e.g.,
cognitive ability and personality tests). However, the
soft skills measured by the game-based assessment do
not predict additional variance in either job perfor-
mance or OCBs, above the effect of cognitive ability
and personality.
To sum up, both personality and intelligence tests have been extensively tested in academic contexts and their validity in predicting GPA has been established. The emergence of the internet and technology, as well as
Table 2. Hierarchical Regression Analysis of the GBA on the Three Criterion Measures
Job Performance (N = 63) OCB (n = 63) GPA (N = 113)
GBAs β t ΔR² ΔF β t ΔR² ΔF β t ΔR² ΔF
Resilience .14 .94 .13 2.10 –.12 –.79 .07 1.10 –.16 –1.58 .10 3.06
Flexibility .30* 2.20 .08 .58 .06 .58
Adaptability –.28 –1.85 –.18 1.14 .18 1.76
Decision making .17 1.29 .20 1.47 .25** 2.61
Note: OCB = Organizational Citizenship Behavior; GPA = Grade Point Average.
*p < .05. **p < .01. ***p < .001.
Table 3. Hierarchical Regression Analysis of the GBA on the Three Criterion Measures controlling for Cognitive Ability and Personality
Job Performance (N = 63) OCB (n = 63) GPA (N = 113)
Predictors β t ΔR² ΔF β t ΔR² ΔF β t ΔR² ΔF
Step 1
Cognitive ability .04 .30 .26 332.** .01 .10 .20 2.33* .09 .92 .04 .70
Extroversion –.05 –.30 .09 .55 .08 .77
Agreeableness .08 .53 .35* 2.27 –.20 –1.86
Conscientiousness .28* 2.07 .18 1.26 .18 1.83
Emotional Stability .06 .44 .08 .60 –.07 –.73
Openness to experience .30** 2.44 .02 .12 .02 .20
Step 2
Resilience –.02 –.15 .06 1.11 –.17 –1.05 .03 .57 –.20 –1.84 .12 3.73**
Flexibility .26 1.9 –.03 –.24 .07 .71
Adaptability –.19 –1.30 –.04 –.23 .22* 2.07
Decision making .16 1.15 .03 .19 .26** 2.73
Note: OCB = Organizational Citizenship Behavior; GPA = Grade Point Average.
*p < .05. **p < .01. ***p < .001.
the familiarity of new generations with games, is likely to generate increasing interest in the validity of game-based assessments in predicting academic performance beyond traditional selection methods. The
additive value of using a game-based assessment mea-
suring adaptability and decision making, both as a
group and individually, in predicting GPA beyond
personality (e.g., FFM) and cognitive ability tests (e.g.,
ICAR), has been established.
Our results are of interest to researchers and prac-
titioners of Work/Organizational Psychology interested
in the prediction of work and academic performance,
in that they support the incremental validity of a
game-based assessment over and above traditional
selection methods. They also help address empirical unknowns about the psychometric properties and effectiveness of game-based assessments in employee selection.
Game-based assessments might be used as a sup-
plement or replacement tool to traditional selection
methods as they add to the prediction of perfor-
mance of candidates or students. However, it is highly important to test the effectiveness of game-based assessments using objective measures of performance, such as supervisors’ ratings, and a test-retest reliability
methodology to establish further the psychometric
properties of the new assessment method. Moreover,
similar to SJTs, game-based assessments might improve
the information gathered about applicants during
the selection process as well as applicant reactions
(Armstrong et al., 2016). Gamification might increase
engagement levels which in turn might lead to reten-
tion and motivation during the process of selection
as well as better predictions about person-job fit
(e.g., Chamorro-Premuzic, Akhtar, Winsborough, &
Sherman, 2017). Using new technologies and game elements in assessments, recruiters and HR professionals might improve selection decisions by making more robust inferences about applicants’ performance, since game-based assessments do not rely on self-reported measures that applicants are likely to fake (Fetzer et al., 2017).
Another reason that traditional selection methods might be reconsidered and replaced by new game-based tools is that the latter are popular among younger generations. Organizations that include game-based assessments in the employee selection process might provide a new, technologically advanced experience to applicants, thereby sending signals about organizational attributes (e.g., innovation) and making the process more fun.
The present study is not without limitations. First
of all, performance outcomes were assessed via self-
report measures. Although it is suggested that objec-
tive measures are the best indicators of individual
employee performance, the unavailability of such
measurements has forced many previous studies to
use self-reported measures of performance (Pransky
et al., 2006). The use of objective measures or supervisors’ reports of employees’ performance would lead to
more robust findings about the predictive validity of
the game-based assessment. Also, some of the GBA’s
dimensions were not found to predict performance.
One reason might be the use of self-reported mea-
sures of performance. “It is likely that self-report and
objective measures provide information on distinct, dif-
ferent aspects of work performance. Objective measures,
even in jobs that are apparently routine and straightfor-
ward, can present challenging levels of complexity, and
may provide an estimation of only one dimension of actual
job performance.” (Pransky et al., 2006, p. 396). Future
research should explore the ability of the GBA dimensions (e.g., resilience or adaptability) to predict performance using supervisory ratings or objective performance data.
To establish further the effectiveness of the use of
gamification in employee selection, future research
should also explore applicants’ reactions. For example,
candidates perceive multimedia tests as more valid
and enjoyable and as a result, they are more satisfied
with the selection process while organizational attrac-
tiveness and positive behavioral intentions are
increased (Oostrom, Born, & van der Molen, 2013). The
impact of game-based assessments on perceived fair-
ness, organizational attractiveness and job pursuit
behaviors should also be investigated to support fur-
ther their suitability in the selection process. Also,
the current study does not address competence and
previous experience with technology, which might
influence test-takers’ performance. For example,
candidates who have experience with on-line games
and/or feel competent to use new technology might
have less anxiety when new technology is used (Cascio
& Montealegre, 2016), and as a result, perform better in
a game-based assessment. In general, the limited
knowledge and lack of empirical research on the use of
gamification in employee selection have made the estab-
lishment of a game-based assessment as an effective
selection method even more challenging.
Future research should also explore the role of
demographic variables on individuals’ performance in
game-based assessments. Instead of using demographic
variables simply as mere control variables in theory
testing, Spector and Brannick (2011) suggest rethinking the use of demographics in the first place, focusing on the mechanisms that explain relations with demographics rather than on demographic variables that serve as proxies for the real variables of interest.
Finally, the study might suffer from common method
variance effects, since we used only self-reported measures. In order to reduce this effect, we asked the
participants to complete the measures on two separate occasions. Moreover, the Harman’s single-factor test we conducted, following the guidelines of Podsakoff, MacKenzie, Lee, and Podsakoff (2003), did not indicate a substantial impact of common method variance on our results.
Game-based assessments have recently appeared
in employee selection, calling for further research on
their validity. Our study contributes to research on
employee selection methods by examining the crite-
rion related validity of a game-based assessment mea-
suring soft skills. Findings of our study indicate that
assessments incorporating game elements might pre-
dict self-rated job performance and academic per-
formance, as measured by GPA. Moreover, exploring
the incremental validity of the game-based assessment
method, we provided evidence that it can predict GPA
above and beyond the effect of traditional selection
methods, such as personality and cognitive ability tests.
These results could change the way organizations and
colleges approach traditional assessment methods, making the use of gamification in work and academic
contexts more widespread in the future.
References
Armstrong M. B., Landers R. N., & Collmus A. B. (2016).
Gamifying recruitment, selection, training, and performance
management: Game-thinking in human resource
management. In D. Davis & H. Gangadharbatla (Eds.),
Emerging research and trends in gamification (pp. 140–165).
Hershey, PA: IGI Global.
Avey J. B., Reichard R. J., Luthans F., & Mhatre K. H. (2011).
Meta-analysis of the impact of positive psychological
capital on employee attitudes, behaviors, and
performance. Human Resource Development Quarterly, 22(2),
127–152. https://doi.org/10.1002/hrdq.20070
Barrick M. R., Mount M. K., & Judge T. A. (2001).
Personality and performance at the beginning of the
new millennium: What do we know and where do
we go next? International Journal of Selection and
Assessment, 9(1-2), 9–30. https://doi.org/10.1111/1468-
2389.00160
Bhattacharya M., Gibson D. E., & Doty D. H. (2005). The
effects of flexibility in employee skills, employee
behaviors, and human resource practices on firm
performance. Journal of Management, 31(4), 622–640.
https://doi.org/10.1177/0149206304272347
Brackett M. A., Rivers S. E., Reyes M. R., & Salovey P.
(2012). Enhancing academic performance and social and
emotional competence with the RULER feeling words
curriculum. Learning and Individual Differences, 22(2),
218–224. https://doi.org/10.1016/j.lindif.2010.
10.002
Cascio W. F., & Montealegre R. (2016). How technology is
changing work and organizations. Annual Review of
Organizational Psychology and Organizational Behavior, 3(1),
349–375. https://doi.org/10.1146/annurev-orgpsych-
041015-062352
Chamorro-Premuzic T., Akhtar R., Winsborough D., &
Sherman R. A. (2017). The datafication of talent:
How technology is advancing the science of human
potential at work. Current Opinion in Behavioral Sciences,
18, 13–16. https://doi.org/10.1016/j.cobeha.2017.
04.007
Chamorro-Premuzic T., & Furnham A. (2008). Personality,
intelligence and approaches to learning as predictors of
academic performance. Personality and Individual
Differences, 44(7), 1596–1603. https://doi.org/10.1016/
j.paid.2008.01.003
Chiabur D. S., Oh I.-S., Berry C. M., Li N., & Gardner R. G.
(2011). The five-factor model of personality traits and
organizational citizenship behaviors: A meta-analysis.
Journal of Applied Psychology, 96, 1140–1166. https://doi.
org/10.1037/a0024004
Condon D. M., & Revelle W. (2014). The international
cognitive ability resource: Development and initial
validation of a public-domain measure. Intelligence,
43, 52–64. https://doi.org/10.1016/j.intell.2014.
01.004
Côte S., & Miners C. T. H. (2006). Emotional intelligence,
cognitive intelligence, and job performance. Administrative
Science Quarterly, 51(1), 1–28. https://doi.org/10.2189/
asqu.51.1.1
Fetzer M., McNamara J., & Geimer J. L. (2017).
Gamification, serious games and personnel selection. In
H. W. Goldstein, E. D. Pulakos, J. Passmore, & C. Semedo
(Eds.), The Wiley Blackwell handbook of the psychology of
recruitment, selection and employee retention (pp. 293–309).
West Sussex, UK: John Wiley & Sons Ltd.
Georgiou K., Nikolaou I., & Gouras A. (2017). Serious
gaming in employees’ selection process. In I. Nikolaou,
Alliance for Organizational Psychology Invited
Symposium-The impact of technology on recruitment and
selection: An international perspective. Paper presented at
the 32nd Annual Conference of the Society for Industrial and
Organizational Psychology, Orlando, USA.
Goldberg L. R., Johnson J. A., Eber H. W., Hogan R.,
Ashton M. C., Cloninger C. R., & Gough H. G. (2006).
The international personality item pool and the future of
public-domain personality measures. Journal of Research in
Personality, 40(1), 84–96. https://doi.org/10.1016/j.jrp.
2005.08.007
Golden W., & Powell P. (2000). Towards a definition
of flexibility: In search of the Holy Grail? Omega, 28(4),
373–384. https://doi.org/10.1016/S0305-0483(99)
00057-2
Gray A., (2016). The 10 skills you need to thrive in the Fourth
Industrial Revolution. Retrieved from The World Economic
Forum website: https://www.weforum.org/agenda/
2016/01/the-10-skills-you-need-to-thrive-in-the-fourth-
industrial-revolution
Hamtiaux A., Houssemand C., & Vrignaud P. (2013).
Individual and career adaptability: Comparing models
and measures. Journal of Vocational Behavior, 83(2), 130–141.
https://doi.org/10.1016/j.jvb.2013.03.006
International Cognitive Ability Resource (ICAR) (2014).
[Public-domain assessment tool]. Retrieved from https://
icar-project.com/
Kehoe J. F. (2000). Managing selection in changing
organizations: Human resource strategies. San Francisco,
CA: Jossey-Bass Publ.
Kwan H.-K., & Mao Y. (2011). The role of citizenship
behavior in personal learning and work–family
enrichment. Frontiers of Business Research in China, 5(1),
96–120. https://doi.org/10.1007/s11782-011-0123-6
Lim B.-C., & Ployhart R. E. (2006). Assessing the convergent
and discriminant validity of Goldberg’s international
personality item pool: A multitrait-multimethod
examination. Organizational Research Methods, 9(1), 29–54.
https://doi.org/10.1177/1094428105283193
Luthans F. (2002). The need for and meaning of positive
organizational behavior. Journal of Organizational Behavior,
23(6), 695–706. https://doi.org/10.1002/job.165
Martin A. J., & Marsh H. W. (2006). Academic resilience and
its psychological and educational correlates: A construct
validity approach. Psychology in the Schools, 43(3), 267–281.
https://doi.org/10.1002/pits.20149
Martin A. J., & Marsh H. W. (2008). Academic buoyancy:
Towards an understanding of students’ everyday
academic resilience. Journal of School Psychology, 46(1),
53–83. https://doi.org/10.1016/j.jsp.2007.01.002
McKinsey & Company (Producer). (2017). The digital
future of work: What skills will be needed? [Video].
Available from https://www.youtube.com/watch?v=
UV46n44jnoA
Miller D., & Lee J. (2001). The people make the process:
Commitment to employees, decision making, and
performance. Journal of Management, 27(2), 163–189.
https://doi.org/10.1177/014920630102700203
Morgeson F. P., Campion M. A., Dipboye R. L.,
Hollenbeck J. R., Murphy K., & Schmitt N. (2007).
Are we getting fooled again? Coming to terms with
limitations in the use of personality tests for personnel
selection. Personnel Psychology, 60(4), 1029–1049.
https://doi.org/10.1111/j.1744-6570.2007.00100.x
Nelson G. D. (1984). Assessment of health decision making
skills of adolescents. Retrieved from ERIC database
(ED252774).
Nikolaou I., & Robertson I. T., IV. (2001). The Five-Factor
model of personality and work behavior in Greece.
European Journal of Work and Organizational Psychology, 10(2),
161–186. https://doi.org/10.1080/13594320143000618
Ones D. S., Dilchert S., & Viswesvaran C. (2012). Cognitive
abilities. In N. Schmitt (Ed.), The Oxford handbook of
personnel assessment and selection (pp. 179–224). New York,
NY: Oxford University Press.
Oostrom J. K., Born M. P., & van der Molen H. T.
(2013). Webcam tests in personnel selection. In D. Derks &
A. Bakker (Eds.), The psychology of digital media at work
(pp. 166–180). USA & Canada: Psychology Press.
Parker J. D. A., Creque R. E., Barnhart D. L., Harris J. I.,
Majeski S. A., Wood L. M., ... Hogan M. J. (2004).
Academic achievement in high school: Does emotional
intelligence matter? Personality and Individual Differences,
37(7), 1321–1330. https://doi.org/10.1016/j.paid.
2004.01.002
Paul H., Bamel U. K., & Garg P. (2016). Employee
resilience and OCB: Mediating effects of organizational
commitment. Vikalpa, 41(4), 308–324. https://doi.
org/10.1177/0256090916672765
Picchi A. (2016, August 31). Do you have the “soft skills”
employers badly need? CBSNews.com. Retrieved from
https://www.cbsnews.com/news/do-you-have-the-soft-
skills-employers-badly-need/
Ployhart R. E., & Bliese P. D. (2006). Individual adaptability
(I–ADAPT) theory: Conceptualizing the antecedents,
consequences, and measurement of individual differences
in adaptability. In C. Shawn Burke, Linda G. Pierce, &
Eduardo Salas (Eds.) Advances in human performance and
cognitive engineering research (Vol. 6, 3–39). Bingley, UK:
Emerald Group Publishing Limited.
Podsakoff P. M., MacKenzie S. B., Lee J.-Y., & Podsakoff
N. P. (2003). Common method biases in behavioral research:
A critical review of the literature and recommended
remedies. Journal of Applied Psychology, 88(5), 879–903.
https://doi.org/10.1037/0021-9010.88.5.879
Poropat A. E. (2009). A meta-analysis of the five-factor model
of personality and academic performance. Psychological
Bulletin, 135(2), 322–338. https://doi.org/10.1037/a0014996
Pransky G., Finkelstein S., Berndt E., Kyle M., Mackell J., &
Tortorice D. (2006). Objective and self-report work
performance measures: A comparative analysis.
International Journal of Productivity and Performance
Management, 55(5), 390–399. https://doi.org/10.1108/
17410400610671426
Rohde T. E., & Thompson L. A. (2007). Predicting
academic achievement with cognitive ability.
Intelligence, 35(1), 83–92. https://doi.org/10.1016/
j.intell.2006.05.004
Roth P. L., BeVier C. A., Switzer F. S., III., & Schippmann J. S.
(1996). Meta-analyzing the relationship between grades and
job performance. Journal of Applied Psychology, 81(5), 548–556.
https://doi.org/10.1037/0021-9010.81.5.548
Ryan A. M., & Ployhart R. E. (2014). A century of selection.
Annual Review of Psychology, 65(1), 693–717. https://doi.
org/10.1146/annurev-psych-010213-115134
Schmitt N. (2014). Personality and cognitive ability as
predictors of effective performance at work. Annual Review
of Organizational Psychology and Organizational Behavior,
1(1), 45–65. https://doi.org/10.1146/annurev-orgpsych-
031413-091255
Shute V. (2015, August). Stealth assessment in video
games. Paper presented at the Australian Council for Educational Research conference “Learning Assessments: Designing the Future”. Melbourne, Australia.
Shute V. J., Ventura M., Bauer M., & Zapata-Rivera D.
(2009). Melding the power of serious games and
embedded assessment to monitor and foster learning. In
U. Ritterfeld, M. Cody, & P. Vordered (Eds.), Serious games:
Mechanisms and effects (pp. 295–321). New York and
London: Routledge, Taylor & Francis.
Smith C. A., Organ D. W., & Near J. P. (1983).
Organizational citizenship behavior: Its nature and
antecedents. Journal of Applied Psychology, 68(4), 653–663.
https://psycnet.apa.org/doi/10.1037/0021-9010.68.4.653
Somech A. (2010). Participative decision making
in schools: A mediating-moderating analytical
framework for understanding school and teacher
outcomes. Educational Administration Quarterly,
46(2), 174–209. https://doi.org/10.1177/
1094670510361745
Spector P. E., & Brannick M. T. (2011). Methodological
urban legends: The misuse of statistical control
variables. Organizational Research Methods, 14(2),
287–305. https://doi.org/10.1177/1094428110369842
Strenze T. (2007). Intelligence and socioeconomic success:
A meta-analytic review of longitudinal research. Intelligence,
35(5), 401–426. https://doi.org/10.1016/j.intell.2006.09.004
van Iddekinge C. H., Lanivich S. E., Roth P. L., & Junco E.
(2016). Social media for selection? Validity and adverse
impact potential of a Facebook-based assessment. Journal
of Management, 42(7), 1811–1835. https://doi.org/
10.1177/0149206313515524
https://www.cambridge.org/core/terms. https://doi.org/10.1017/sjp.2019.5
Downloaded from https://www.cambridge.org/core. Athens University of Economics and Business, on 01 Mar 2019 at 14:08:41, subject to the Cambridge Core terms of use, available at
... Thus, it may be considered a "hard" version of gamified assessment, as opposed to the aforementioned gamified assessments. An example of gamefully designed assessment is Owiwi [14], a situational-judgment test in which the item content can only be understood through the game's narrative. ...
... If gamified assessments are considered for personnel selection, they must demonstrate their ability to predict criteria of interest to the organization, namely one or more dimensions of job performance. Other types of GRAs have been shown to have relationships with task performance or contextual performance [14,25,26]. ...
... However, the present study extends to the main dimensions of job performance. Furthermore, although other GRAs such as gamefully designed assessments and game-based assessments have found support for task performance and contextual performance [14,26], to the best of our knowledge this is the first study that investigates the use of gamified assessments (or any type of GRA) in regression models with counterproductive work behaviors as criterion. As mentioned, counterproductive work behaviors are a serious problem in modern organizations [45], so being able to identify using GRAs which characteristics of applicants' are related to display these behaviors is always relevant. ...
Article
Full-text available
Personality questionnaires stand as crucial instruments in personnel selection but their limitations turn the interest towards alternatives like game-related assessments (GRAs). GRAs developed for goals other than fun are called serious games. Within them, gamified assessments are serious games that share similarities with traditional assessments (questionnaires, situational judgment tests, etc.) but they incorporate game elements like story, music, and game dynamics. This paper aims to contribute to the research on serious games as an alternative to traditional personality questionnaires by analyzing the characteristics of a gamified assessment called VASSIP. This gamified assessment, based on an existing measure of the Big Five personality traits, incorporates game elements such as storyfication, immersion, and non-evaluable gamified dynamics. The study performed included 98 university students (77.6% with job experience) as participants. They completed the original personality measure (BFI-2-S), the gamified evaluation of personality (VASSIP), a self-report measure of the main dimensions of job performance (task performance, contextual performance, and counterproductive work behaviors), and measures of applicant reactions to BFI-2-S and VASSIP. Results showed that the gamified assessment behaved similarly to the original personality measure in terms of reliability and participants’ scores, although the scores in Conscientiousness were substantially higher in VASSIP. Focusing on self-reports of the three dimensions of job performance, regression models showed that the gamified assessment could explain all of them. Regarding applicant reactions, the gamified assessment obtained higher scores in perceptions of comfort, predictive validity, and attractiveness, although the effect size was small except for the latter. Finally, all applicant reactions except for attractiveness were related to age and personality traits. In conclusion, gamified assessments have the potential to be an alternative to traditional personality questionnaires but VASSIP needs more research before its application in actual selection processes.
... For example, people find video games engaging and tend to be more motivated to complete a game-based assessment compared with people performing a paper-and-pencil test [28]. Game-based assessments also ensure fidelity because all participants are interacting with a predefined game, thereby reducing the likelihood of introducing bias by the person scoring their performance [29]. ...
Article
Full-text available
Patriotism is a topic of significant importance in many countries around the world. Preschool children play a crucial role in shaping the future, and their patriotism is closely linked to the future development of their nation. Currently, the game-based assessment has advantages over traditional evaluation methods and is more suitable for preschool children. This study employed a game-based assessment method to investigate patriotism among preschool children aged 3 to 6 years in China. The results indicated that their levels of patriotism were above average and tended to increase with age. Preschool children in the capital region scored higher on national cognitive mastery. However, preschool children’s understanding of patriotism remains somewhat vague, and they often articulate and express their patriotism through concrete examples. Four types of patriotism among preschool children were identified: high-level patriotism, cognitive-based patriotism, emotional-based patriotism, and low-level patriotism. The findings of this study contribute to a more comprehensive understanding of patriotism in preschool children and provide an evidence-based reference for the development of patriotic education.
... Jones, 1956) that may be influenced by employee performance but also other factors (e.g., tenure, occupation). We excluded selfratings of performance (e.g., Nikolaou et al., 2019) given the concerns about self-serving bias and the norm to use supervisors or peers to evaluate performance. We also excluded job knowledge tests and work samples as measures of job performance (e.g., Russell et al., 2017) because they are more typically considered measures of job-relevant knowledge and skills than measures of job performance. ...
Article
Full-text available
Many organizations assess job applicants’ academic performance (AP) when making selection decisions. However, researchers and practitioners recently have suggested that AP is not as relevant to work behavior as it used to be due to factors such as grade inflation and increased differences between academic and work contexts. The present meta-analysis examines whether, and under what conditions, AP is a useful predictor of work behavior. Mean correlations (corrected for error in the criterion) between AP and outcomes were .21 for job performance (k = 114), .34 for training performance (k = 8), and −.02 for turnover (k = 20). There was considerable heterogeneity in validity estimates for job performance (80% credibility interval [.04, .37]). Moderator analyses revealed that AP is a better predictor of performance (a) for AP measures that are more relevant to students’ future jobs, (b) for professor ratings of AP than for grades and class rank, (c) for samples that include applicants from the same university or from the same major, and (d) for official records of AP than for applicant self-reports. Job relevance was the strongest and most consistent moderator with operational validities in the .30s and .40s for measures that assessed AP in major-specific courses or courses in which students are evaluated on behaviors relevant to their future jobs (e.g., practicum classes). Overall, researchers and organizations should carefully consider whether and how AP is relevant to particular jobs and outcomes, as well as use designs and measures that optimize the predictive value of AP.
... And it is here (validity and sincerity) that researchers in Work and Organizational Psychology are currently concentrating their efforts to design new techniques. The three most prominent lines of work focus on the development of forced-choice scales for personality tests (Salgado & Lado, 2018), the application of neuroscience, with very promising contributions such as General Acuity Theory, which we discuss in the next section (Leeds, 2020), and gamification (Nikolaou, Georgiou & Kotsasarlidou, 2019). ...
Book
Full-text available
The content of this manual is oriented toward professional practice while consistently taking into account the evidence provided by the academic literature, thereby proposing a model of professionalism that integrates theory and practice. Students, especially at the postgraduate level, as well as practitioners and technical specialists, can easily find resources here to competently handle any situation they need to manage related to job definition and the attraction and assessment of talent in organizations, thanks to the manual’s applied, didactic, and up-to-date approach.
... This suggests that extrinsic rewards, such as the award of points, lead segments of the user population to participate more intensely when a point system is implemented (Dung et al., 2020; Lucassen & Jansen, 2014; Toda et al., 2018). In contrast, other studies have reported different results, indicating that gamification can inhibit intrinsic motivation when the provision of rewards is perceived as controlling, leading to feelings of helplessness and incompetence and a decline in intrinsic motivation (Hanus & Fox, 2015; Nikolaou et al., 2019). ...
Article
Full-text available
Purpose: The purpose of this study is to assess the impact of gamification on the performance of online transportation drivers via social values, motivation and participatory engagement. Design/methodology/approach: This quantitative study is based on primary data from 110 online transportation drivers in five cities in the Central Java Province region: Semarang, Pekalongan, Kudus, Purwokerto and Solo. The partial least squares (PLS) method is used to analyse and evaluate the data. Findings/results: The results show that job gamification positively and significantly influences driver performance through social value, motivation and participatory engagement. Practical implications: The findings can be applied to increase employee performance in businesses or organisations striving to be sustainable and friendly, and they offer a practical basis for decisions aimed at increasing employee engagement, productivity, learning and skill development, social value and collaboration. Originality/value: While the study establishes a positive relationship between gamification and driver performance through the mediating factors of social value, motivation and participatory engagement, future research could delve deeper into which specific gamification techniques and design elements are most effective in the context of online transportation. Additionally, exploring potential moderating factors, such as driver demographics or market conditions, could provide a more nuanced understanding of the gamification-performance relationship. Such in-depth exploration could help transportation companies tailor their gamification strategies for maximum impact and address potential limitations in current research.
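The study itself uses partial least squares; as a simplified stand-in, the sketch below illustrates the underlying mediation logic with two ordinary least squares regressions on simulated data, estimating the indirect gamification → motivation → performance effect as the product of the a and b paths. The variable names and effect sizes are hypothetical and chosen only for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 110  # same sample size as the study; the data are simulated

# Simulated (hypothetical) data: gamification -> motivation -> performance.
gamification = rng.normal(size=n)
motivation = 0.5 * gamification + rng.normal(size=n)
performance = 0.4 * motivation + 0.2 * gamification + rng.normal(size=n)
df = pd.DataFrame({"gamification": gamification,
                   "motivation": motivation,
                   "performance": performance})

# Path a: predictor -> mediator; paths b and c': mediator and predictor -> outcome.
m_model = smf.ols("motivation ~ gamification", df).fit()
y_model = smf.ols("performance ~ motivation + gamification", df).fit()

a = m_model.params["gamification"]
b = y_model.params["motivation"]
indirect = a * b   # product-of-coefficients estimate of the mediated effect
print(f"indirect effect (a*b) = {indirect:.3f}")
```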
... Alongside the goal of enabling effective learning experiences, the assessment of learning processes through gamified and game-based means has intensified in recent years (Bezzina, 2015; Kato & de Klerk, 2017; Nikolaou et al., 2019). This can be achieved either through playing games or through gamified approaches used solely for the purpose of simulation and/or assessment, or by using the same game or gamified approach that was used for playful learning prior to the assessment. ...
Chapter
Full-text available
This abstract was prepared separately for ResearchGate: This chapter examines the innovative use of blockchain technologies in school education, particularly in the context of serious games and gamification. It discusses how blockchain can strengthen the security and integrity of digital learning environments and assessment tools by preventing data manipulation and enabling a fair, transparent record of learning achievements. The decentralized and immutable nature of blockchain makes it possible to secure and recognize educational credentials across borders. Good-practice examples are also presented, such as the serious game "Gallery Defender", which illustrates the practical application of this technology in the education sector. The chapter concludes with an outlook on the challenges and further development needed to integrate blockchain more broadly into e-learning applications, and outlines the relevance of this technology for a future-proof educational landscape.
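The tamper-evidence idea the chapter attributes to blockchain can be conveyed with a minimal hash-chain sketch: each record stores a hash that covers its payload and the hash of the previous record, so altering any stored learning result invalidates the chain. This is a toy illustration, not the chapter's actual implementation, and the record fields are invented for the example.

```python
import hashlib
import json

def add_record(chain, payload):
    """Append a record whose hash covers the payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def is_valid(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        body = {"payload": block["payload"], "prev_hash": prev_hash}
        if block["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

ledger = []
add_record(ledger, {"student": "A", "game": "Gallery Defender", "score": 87})
add_record(ledger, {"student": "B", "game": "Gallery Defender", "score": 64})
print(is_valid(ledger))          # True
ledger[0]["payload"]["score"] = 100
print(is_valid(ledger))          # False: manipulation is detected
```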
Article
Full-text available
Aim. The purpose of the paper is to study the concept of gamification in the field of psychology on the basis of a bibliometric analysis and to identify ways to improve the efficiency of its use. The article is devoted to a bibliometric analysis of the works indexed in the Scopus scientometric database that cover gamification in psychology and psycholinguistics. Methods. A dialectical approach allowed us to formulate the philosophical aspects, factors and conditions of gamification in psychology across different areas of activity. It was found that there are practically no studies describing the use of gamification in psycholinguistics. For the bibliometric analysis, the online platform VOSviewer was used to process and summarize the Scopus data on gamification in the field of psychology. Results. The analysis shows the relevance of the chosen research topic, as the number of published papers grows every year: 2012 (n=1), 2013 (n=5), 2014 (n=10), and 2023 (n=139). A keyword visualization map identified 7 clusters in which the concept of gamification in psychology is most often addressed. By affiliation, most of the papers were published by institutions located in the United States. The bibliometric analysis also identified the top five authors who currently have the largest number of works on gamification in the field of psychology. Conclusions. The results show that bibliometric analysis is an effective tool for conducting a generalized study of published works on a given keyword. For the first time, a bibliometric analysis of scientific papers on gamification in the field of psychology (n=718) indexed in the Scopus scientometric database was carried out.
Conference Paper
Full-text available
We investigate the suitability of a microworld simulation of complex problem-solving (CPS), a type of game-based assessment that mimics the dynamics of decision-making in the real world through complex situations, as a tool for personnel selection. After completing either Democracy (D3; i.e., a political game) or Ecopolicy (i.e., a simulation of society's management), a sample of 48 participants completed a user acceptance questionnaire. CPS performance was very low for both D3 and Ecopolicy; nonetheless, the distribution of scores could provide a discriminating evaluation of CPS.
Article
Full-text available
Gamification has expanded dramatically in the field of human resource management in recent years. Research examining the influence of gamification on candidates' perceptions of personnel selection is still growing, as the majority of academic work on gamification relates to the field of education. This paper aims to investigate, through a systematic review, how job seekers perceive selection assessment tools based on gamification. The systematic review covers articles published between 2010 and 2022 and indexed in five databases. In the first stage, 5260 articles were included in our search. As exclusion criteria, we chose articles published before 2010, references other than journal articles, reviews, and conference articles, articles published in languages other than English and French, and off-topic articles. After assessing all the references, we selected 21 articles following the PRISMA statement. The results highlighted 21 scientific articles covering the following themes: candidates' reactions to gamification tools and serious games, candidates' reactions to gamified selection tests, advantages and limits of gamification, and candidates' performance levels when gamification is used. Limitations and implications for future research are discussed.
Article
Full-text available
Executive Summary: With the increased popularity of positive psychology, there is a greater emphasis on exploring positive human resource strengths to address workplace challenges and augment organizational performance. Previous research suggests that resilience relates positively to desired employee attitudes, behaviours, and performance, such as organizational citizenship behaviour (OCB). However, it would be intriguing to understand the underlying mechanism of the resilience-OCB relationship. Towards this, the study examines the mediating role of organizational commitment. In light of the identified research gaps, the study explores the mechanism of the relationship between resilience and OCB in the context of Indian organizations. The study sample comprised employees (N = 345) working in the manufacturing industries of Uttarakhand and Himachal Pradesh in India. Data were collected with self-administered questionnaires through systematic random sampling. A model was developed and tested in which the effects of resilience on OCB were hypothesized to be mediated by organizational commitment. Hypotheses were tested using hierarchical multiple regression, and mediating effects were tested via bootstrapping in SPSS. The results provide empirical evidence for the positive relationship between resilience and OCB, and also show that resilience influences organizational commitment. As hypothesized, the results support the mediating effect of organizational commitment in the relationship between resilience and OCB, explaining the underlying mechanism of the resilience-OCB relationship. The mediation is partial, which means that resilience influences OCB both directly and indirectly through organizational commitment. The study offers significant advancements for both resilience and OCB research. The results also offer direction to organizations that wish to stimulate and maintain positive employee outcomes for competitive advantage: employee outcomes can be improved by developing resilience among employees. Implications of promoting resilience in the workplace are discussed.
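The bootstrapped mediation test described above was run in SPSS; a rough Python sketch of the same idea is given below, computing a percentile bootstrap confidence interval for the indirect resilience → commitment → OCB effect on simulated data. The sample size matches the study, but the data, effect sizes, and variable names are assumptions made for illustration.

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b from two least-squares fits (simple mediation model)."""
    # a: regress the mediator on the predictor (with intercept)
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    # b: regress the outcome on the mediator and the predictor (with intercept)
    Xb = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][1]
    return a * b

rng = np.random.default_rng(1)
n = 345  # same sample size as the study; the data here are simulated
resilience = rng.normal(size=n)
commitment = 0.45 * resilience + rng.normal(size=n)
ocb = 0.35 * commitment + 0.25 * resilience + rng.normal(size=n)

# Percentile bootstrap of the indirect effect (resilience -> commitment -> OCB).
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(resilience[idx], commitment[idx], ocb[idx]))
low, high = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{low:.3f}, {high:.3f}]")
```

If the interval excludes zero while the direct path remains nonzero, the pattern corresponds to the partial mediation reported in the study.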
Article
Full-text available
Given the rapid advances and the increased reliance on technology, the question of how it is changing work and employment is highly salient for scholars of organizational psychology and organizational behavior (OP/OB). This article attempts to interpret the progress, direction, and purpose of current research on the effects of technology on work and organizations. After a review of key breakthroughs in the evolution of technology, we consider the disruptive effects of emerging information and communication technologies. We then examine numbers and types of jobs affected by developments in technology, and how this will lead to significant worker dislocation. To illustrate technology's impact on work, work systems, and organizations, we present four popular technologies: electronic monitoring systems, robots, teleconferencing, and wearable computing devices. To provide insights regarding what we know about the effects of technology for OP/OB scholars, we consider the results of research conducted from four different perspectives on the role of technology in management. We also examine how that role is changing in the emerging world of technology. We conclude by considering approaches to six human resources (HR) areas supported by traditional and emerging technologies, identifying related research questions that should have profound implications both for research and for practice, and providing guidance for future research.
Article
Full-text available
Academic buoyancy is developed as a construct reflecting everyday academic resilience within a positive psychology context and is defined as students' ability to successfully deal with academic setbacks and challenges that are typical of the ordinary course of school life (e.g., poor grades, competing deadlines, exam pressure, difficult schoolwork). Data were collected from 598 students in Years 8 and 10 at five Australian high schools. Half-way through the school year and then again at the end of the year, students were asked to rate their academic buoyancy as well as a set of hypothesized predictors (self-efficacy, control, academic engagement, anxiety, teacher-student relationship) in the area of mathematics. Multilevel modeling found that the bulk of variance in academic buoyancy was explained at the student level. Confirmatory factor analysis and structural equation modeling showed that (a) Time 1 anxiety (negatively), self-efficacy, and academic engagement significantly predict Time 1 academic buoyancy; (b) Time 2 anxiety (negatively), self-efficacy, academic engagement, and teacher-student relationships explain variance in Time 2 academic buoyancy over and above that explained by academic buoyancy at Time 1; and (c) of the significant predictors, anxiety explains the bulk of variance in academic buoyancy.
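A minimal sketch of the multilevel logic described above, assuming simulated data: students are nested in schools, a random-intercept model estimates fixed effects for two predictors, and the intraclass correlation shows how much variance sits at the school versus the student level. The predictors, effect sizes, and group structure are hypothetical and are not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated (hypothetical) data: students nested in schools.
n_schools, n_per_school = 5, 120
school = np.repeat(np.arange(n_schools), n_per_school)
school_effect = rng.normal(scale=0.3, size=n_schools)[school]   # small school-level variance
self_efficacy = rng.normal(size=school.size)
anxiety = rng.normal(size=school.size)
buoyancy = 0.4 * self_efficacy - 0.3 * anxiety + school_effect + rng.normal(size=school.size)

df = pd.DataFrame({"buoyancy": buoyancy, "self_efficacy": self_efficacy,
                   "anxiety": anxiety, "school": school})

# Random-intercept (multilevel) model: fixed effects for the predictors,
# random intercept for school.
model = smf.mixedlm("buoyancy ~ self_efficacy + anxiety", df, groups=df["school"]).fit()
print(model.summary())

# Intraclass correlation: share of the outcome variance located at the school level.
icc = model.cov_re.iloc[0, 0] / (model.cov_re.iloc[0, 0] + model.scale)
print(f"ICC (school level) = {icc:.3f}")
```

A small ICC, as in this simulation, mirrors the finding that the bulk of variance in academic buoyancy is explained at the student level.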
Chapter
Full-text available
Game-thinking is beginning to appear in a wide variety of non-game contexts, including organizational support settings like human resource management (HRM). The purpose of this chapter is two-fold: 1) to explore the opportunities for game-thinking via gamification and serious games in HRM based on current and previous HRM literature and 2) to identify future research areas at the intersection of game-thinking and HRM. Prevailing HRM theories will be applied to the use of game-thinking in different sub-fields of HRM, including recruitment, selection, training, and performance management.
Article
Full-text available
Recent reports suggest that an increasing number of organizations are using information from social media platforms such as Facebook.com to screen job applicants. Unfortunately, empirical research concerning the potential implications of this practice is extremely limited. We address the use of social media for selection by examining how recruiter ratings of Facebook profiles fare with respect to two important criteria on which selection procedures are evaluated: criterion-related validity and subgroup differences (which can lead to adverse impact). We captured Facebook profiles of college students who were applying for full-time jobs, and recruiters from various organizations reviewed the profiles and provided evaluations. We then followed up with applicants in their new jobs. Recruiter ratings of applicants’ Facebook information were unrelated to supervisor ratings of job performance (rs = −.13 to –.04), turnover intentions (rs = −.05 to .00), and actual turnover (rs = −.01 to .01). In addition, Facebook ratings did not contribute to the prediction of these criteria beyond more traditional predictors, including cognitive ability, self-efficacy, and personality. Furthermore, there was evidence of subgroup difference in Facebook ratings that tended to favor female and White applicants. The overall results suggest that organizations should be very cautious about using social media information such as Facebook to assess job applicants.
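The incremental-validity test reported above can be illustrated with a simple hierarchical regression sketch: fit a model with traditional predictors, add the Facebook rating, and inspect the change in R². The data below are simulated so that the Facebook rating is unrelated to performance by construction; the predictor names and sample size are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400  # simulated (hypothetical) applicant sample

cognitive = rng.normal(size=n)
conscientiousness = rng.normal(size=n)
facebook_rating = rng.normal(size=n)   # unrelated to performance by construction
performance = 0.4 * cognitive + 0.3 * conscientiousness + rng.normal(size=n)

df = pd.DataFrame({"performance": performance, "cognitive": cognitive,
                   "conscientiousness": conscientiousness,
                   "facebook_rating": facebook_rating})

# Step 1: traditional predictors only; Step 2: add the Facebook rating.
step1 = smf.ols("performance ~ cognitive + conscientiousness", df).fit()
step2 = smf.ols("performance ~ cognitive + conscientiousness + facebook_rating", df).fit()

delta_r2 = step2.rsquared - step1.rsquared
print(f"R2 step 1 = {step1.rsquared:.3f}, R2 step 2 = {step2.rsquared:.3f}, "
      f"incremental R2 = {delta_r2:.3f}")
```

A near-zero incremental R² is the pattern the study reports for Facebook ratings over cognitive ability, self-efficacy, and personality.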
Chapter
This chapter considers definitions and the boundaries between gamification and serious games. It then concentrates primarily on serious games, recognizing that this covers gamification in general and the common challenges and potential benefits. The chapter also covers current uses of serious games. Serious games may have a marked impact on the field of personnel selection. The chapter further discusses the rationale for using gaming techniques for personnel selection and offers practical guidelines for leveraging this methodology in a selection context. Developing and implementing serious games for personnel selection requires adherence to the same psychometric and legal considerations as any other selection tool, but there are some unique aspects that also need to be considered. These are grouped into the following categories: objectives, design and utilization. Each aspect is discussed in turn. Finally, future directions in research and application of gamification and serious games are discussed.
Article
This article reviews three innovations that not only have the potential to revolutionize the way organizations identify, develop and engage talent, but are also emerging as tools used by practitioners and firms. Specifically, we discuss (a) machine-learning algorithms that can evaluate digital footprints, (b) social sensing technology that can automatically decode verbal and nonverbal behavior to infer personality and emotional states, and (c) gamified assessment tools that focus on enhancing the user-experience in personnel selection. The strengths and limitations of each of these approaches are discussed, and practical and theoretical implications are considered.
Article
This chapter describes measures of cognitive ability (general mental ability and specific abilities) and examines their usefulness for personnel selection. An overview of definitional and theoretical issues as they apply to use of such measures in personnel decision making is provided first. Then, issues of reliability of measures are discussed, again with particular emphasis on implications for personnel selection (e.g., impact on rank order of candidates when using different measures). Next, validities of cognitive ability tests are summarized for the following criteria: overall job performance, task performance, contextual performance, counterproductive work behaviors, leadership, creativity and innovation, voluntary turnover, job satisfaction, and career success. The authors address the nature of predictor-criterion relationships (e.g., usefulness of general versus specific abilities, criterion dynamicity, assumption of linearity) by discussing both recent large-scale evidence in normal samples and among the highly gifted. Finally, the extent to which cognitive ability is captured in tools other than standardized tests is summarized, enabling an evaluation of other selection assessments as substitutes and/or supplements to standardized cognitive ability tests.
Chapter
Work organizations and the employees within these organizations face considerable environmental pressures requiring adaptive change. Several forces have contributed to this need for great adaptation. These are described in many excellent sources (e.g., Cascio, 2003; Ilgen & Pulakos, 1999); here we briefly review their implications for individual adaptability.
Article
Conclusions about the validity of cognitive ability and personality measures based on meta-analyses published mostly in the past decade are reviewed at the beginning of this article. Research on major issues in selection that affect the use and interpretation of validation data are then discussed. These major issues include the dimensionality of personality, the nature and magnitude of g in cognitive ability measures, conceptualizations of validity, the nature of the job performance domain, trade-offs between diversity and validity, reactions to selection procedures, faking on personality measures, mediator and moderator research on test–performance relationships, multilevel issues, Web-based testing, the situational framing of test stimuli, and the context in which selection occurs.