Kirkpatrick Model and Training Effectiveness:
A Meta-Analysis 1982 To 2021
Fahad Nawaz1, Wisal Ahmed2, Muhammad Khushnood3
Abstract
By examining the overall effectiveness of managerial training through the lens of Kirkpatrick's training
effectiveness paradigm, this work seeks to build on the substantial contributions of the past 40 years
of research in the area. The study also assesses the overall findings on the model's well-known levels
(reaction, learning, behavior, and results) and the associations among these levels. Through a
meta-analytic procedure, the study statistically extends and unifies the management training literature.
The meta-analysis of the Kirkpatrick model covered 41 papers (n=41) published between 1982 and 2021.
Although the accumulated literature on Kirkpatrick's four-level training evaluation model suggests
positive associations among its distinct levels, the results do not indicate a significant improvement
in the effectiveness of managerial training from 1982 through 2021. The implications bear directly on
the choice of evaluation techniques for future research on the effectiveness of management training
programs, an implication valued by both academics and practitioners. The study's limitations include
the potential exclusion of prior research and the variety of assessment techniques employed in earlier
studies, beyond the simple categories of objective and subjective assessment. The study's key
contribution is the substantial period it spans, which provides a wider perspective on managerial
training over time.
Keywords: Kirkpatrick Model, Meta-Analysis, Reaction, Learning, Behavior, Results
1. Introduction
Human resource development efforts concentrate on enhancing the skills and knowledge of an
organization's workforce. Training and individual capacity-building activities improve workers'
innate abilities, knowledge, and performance outcomes (Dachner,
1 PhD Scholar, Institute of Business Studies, Kohat University of Science & Technology.
Email: fahad.nawaz1@gmail.com
2 Professor, Institute of Business Studies, Kohat University of Science & Technology.
Email: dr.wisal@kust.edu.pk
3 Associate Professor, Institute of Business Studies, Kohat University of Science & Technology.
Email: mkhushnood@kust.edu.pk
Business & Economic Review: Vol. 14, No.2 2022 pp. 35-56
DOI: dx.doi.org/10.22547/BER/14.2.2
This work is licensed under a Creative Commons Attribution 4.0 International (CC-BY)
ARTICLE HISTORY
Submission Received: 07 May, 2022    First Review: 16 Jun, 2022
Second Review: 02 Aug, 2022    Accepted: 05 Oct, 2022
Ellingson, Noe & Saxton, 2021). Therefore, without well-trained employees, an organization will not
be able to accomplish its goals (Samwel, 2018). Consequently, companies spend thousands of dollars on
training activities (Patki, Sankhe, Jawwad & Mulla, 2021). According to the United States Industrial
training report (2020), global firms invest $696.7 billion in training activities. Similarly, Asian
countries are also spending significant sums on training and technical education. The existing
training evaluation models assess training effectiveness without an appropriate mechanism (Hazan-Liran
& Miller, 2020; Velada & Caetano, 2007). Kirkpatrick introduced a training evaluation model in 1960.
According to Cahapay (2021), Kirkpatrick's paradigm was designed as an effective and productive
technique to assess learning outcomes among individuals and organizational structures with respect
to training.
There are four levels in the Kirkpatrick Training Evaluation Model (KTEM);
namely, trainee reaction, learning, behavior, and result (Ho, Arendt, Zheng &
Hanisch, 2016). The Kirkpatrick model posits substantial correlations between the four stages of
training effectiveness. However, only a small number of research studies have strongly validated
these linkages (Alsalamah & Callinan, 2021; Manzoor & Din, 2019). Research scholars have reported
that organizations frequently neglect to assess the behavior and results levels of the training
effectiveness paradigm because of the challenges involved in their assessment (Alliger & Janak, 1989;
Clement, 1982; Homklin, 2014). Moreover, it has also been reported that when behavior and results are
assessed, participants' responses are often biased (Abdelhakim et al., 2018).
Although the Kirkpatrick model has established significant interconnections among the four levels of
training effectiveness, very limited studies have substantiated these relationships empirically
(Alsalamah & Callinan, 2021; Baluku, Matagi & Otto, 2020; Costabella, 2017). The Kirkpatrick approach
is also mentioned across numerous types of literature, including research articles, books, conference
papers, and gray literature, as per the literature review conducted for this study. Additionally, the
review showed that there are not enough studies worldwide that apply the Kirkpatrick model in the
social sciences domain.
The meta-analytic review found that there are currently 298 research articles, 20 case studies, 48
conference papers, nine books, 49 reviews, one short survey, and 48 books associated with the
Kirkpatrick paradigm in general (all fields). In the social sciences, on the other hand, only a single
case study, sixteen conference papers, three books, twenty-six reviews, no brief survey, and 123
research articles have been published up to this point. The illustration (Figure 1) shows how few
papers linked to the Kirkpatrick model were published between 2011 and 2021. To sum up the discussion,
three gaps have been identified so far. First, mixed findings have been reported concerning the
associations between the four levels of the Kirkpatrick model (Alsalamah & Callinan, 2021; Baluku,
Matagi & Otto, 2020; Costabella, 2017). Second, level three (behavior) and level four (results) are
frequently ignored in measurement (Alliger & Janak, 1989; Clement, 1982; Homklin, 2014). Lastly, there
are few studies on the Kirkpatrick model, particularly in the social sciences domain.
Therefore, to bridge the current research gap, it is necessary to conduct a systematic review
(meta-analysis) of the four levels of the Kirkpatrick model. The focus here is on evaluating the
associations between reaction and learning, learning and behavior, and behavior and results. The
study objective, therefore, is to conduct a systematic review of the relationships among the four
levels of the Kirkpatrick model. This study consolidates the literature on Kirkpatrick's four levels
of training effectiveness. It focuses on the causal associations within the KTEM as reported in the
prior literature and, in particular, enhances the understanding of the causal associations among the
four levels. The study's assessments may enrich the literature on training modules and suggest
suitable human resource development (HRD) strategies for companies. Moreover, the study may deliver
valuable knowledge of training effectiveness and a baseline for training evaluation to academia and
industry.
Figure 1: Published Studies (2011-2021)
2. Kirkpatrick Model-Meta-Analysis
The author performed a meta-analysis of studies on training effectiveness using the Kirkpatrick model
to examine the paradigm thoroughly. To conduct a systematic review in the framework of the Kirkpatrick
training evaluation model from 1982 to 2021, a total of 41 studies (n=41) was considered, with a
combined sample of 8,825 participants (n=8,825). Thirty-three studies (n=33) examined trainee reaction
(level-one) and trainee learning (level-two), twenty-nine studies (n=29) examined trainee learning
(level-two) and trainee behavior (level-three), and just three studies (n=3) examined trainee behavior
(level-three) and trainee result (level-four). Table 1 provides a detailed breakdown of the authors,
journals, sample sizes, and countries of the studies that form the foundation of the research and the
systematic review.
Figure 2: Kirkpatrick Literature in All Fields
Table 1: Studies Selected for Meta-Analysis
S# Authors Journal N Country
1 Clement (1982) Public Personnel Management 50 USA
2 Wexley & Baldwin (1986) Academy of Management Journal 120 USA
3 Baldwin (1992) Journal of Applied Psychology 72 USA
4 Warr & Bunce (1995) Personnel Psychology 106 USA
5 Cannon-Bowers et al. (1995) Military Psychology 1037 USA
6 McEvoy (1997) Society of Human Resources Mgt 140 USA
7 Fisher & Ford (1998) Personnel Psychology 121 USA
8 Warr, Allan & Birdi (1999) Journal of Occup’l & Org’l Psychology 163 UK
9 Bates et al. (2000) Human Resource Development Int’l 150 USA
10 Frayne & Geringer (2000) Journal of Applied Psychology 30 USA
11 Tracey et al. (2001) Human Resource Development Quarterly 420 USA
12 Richman-Hisrich (2001) Human Resource Development Quarterly 1335 USA
13 Gully et al. (2002) Journal of Applied Psychology 181 USA
14 Tan, Hall & Boyce (2003) Human Resource Development Quarterly 283 USA
15 Liao & Tai (2006) Social Behavior & Personality 132 USA
16 Savoldelli et al. (2006) Anesthesiology 42
17 Lim, Lee & Nam (2007) Int’l Journal of Information Mgt 170 Japan
18 Sulsky & Kline (2007) Int’l Journal of Training & Development 65 Canada
19 Bell & Ford (2007) Human Resource Development Quarterly 113 USA
20 Liebermann & Hoffman (2008) Int’l Journal of Training & Development 213 Germany
21 Sitzmann et al. (2009) Academy of Management Proceedings 125 USA
22 Orvis et al. (2009) Journal of Applied Psychology 274 USA
23 Welke et al. (2009) Anesthesia and Analgesia 30 Canada
24 Grant et al. (2010) Clinical Simulation in Nursing 40 USA
25 Fisher et al. (2010) Journal of Applied Psychology 237 USA
26 Van Heukelom et al. (2010) Simulation in Healthcare 161 USA
27 Lin, Chen & Chuang (2011) Int’l Journal of Management 494 Japan
28 Boet et al. (2011) Critical Care Medicine 50 Canada
29 Shinnick et al. (2011) Clinical Simulation in Nursing 168 USA
30 Saks & Burke (2012) Int’l Journal of Training & Development 150 Canada
31 Dreifuerst (2012) Journal of Nursing Education 238 USA
32 Chronister & Brown (2012) Clinical Simulation in Nursing 60 USA
33 Reed et al. (2013) Clinical Simulation in Nursing 64 USA
34 Mariani et al. (2013) Clinical Simulation in Nursing 86 USA
35 Homklin (2014) Int’l Journal of Training & Development 228 Thailand
36 Grant et al. (2014) Nurse Education in Practice 48 USA
37 Reed (2015) Nurse Education in Practice 58 USA
38 Weaver (2015) Clinical Simulation in Nursing 96 USA
39 Liao & Hsu (2019) Int’l Journal of Mgt, Economics & SS 393 Japan
40 Manzoor & Din (2019) Journal of Managerial Sciences 732 Pakistan
41 Zielińska-Tomczak et al. (2021) Nutrients 150 Switzerland
Note. Meta-Analysis Studies
2.1 PRISMA Model
PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) was used to carry out the
systematic review (Moher et al., 2015). Page et al. (2021) recommend the PRISMA model for
meta-analytical reviews. The model is used for several reasons: a) the PRISMA model aims to help
authors improve the reporting of systematic reviews, b) the PRISMA flow diagram visually summarizes
the screening process, and c) the PRISMA model is relevant for mixed-methods systematic reviews that
include quantitative and qualitative studies (Moher et al., 2015). PRISMA consists of four parts,
i.e., identification, screening, eligibility, and inclusion of studies. PRISMA is recognized as a
standard for reporting evidence in systematic reviews and meta-analyses. PRISMA a) demonstrates the
quality of the review, b) allows readers to assess its weaknesses and strengths, c) allows replication
of the review, and d) structures and formats the review (Moher et al., 2015).
2.1.1 Rationale for Using the PRISMA Model
In this study the researchers used the PRISMA model for several reasons. First, the PRISMA model
describes the contemporary state of knowledge, understanding, and relevant uncertainties (Sampson,
Tetzlaff & Urquhart, 2011). Second, the PRISMA model conveys the significance of the review coherently
(Deeks, 2002). Third, the PRISMA model assists scholars in enhancing the meta-analytical review
(Hoffmann et al., 2017). Fourth, PRISMA may also be useful for the critical appraisal of published
systematic reviews, although it is not a quality assessment instrument to gauge the quality of a
systematic review (Sampson, Tetzlaff & Urquhart, 2011). Lastly, PRISMA allows the effects of
interventions on the variables examined in the prior literature to be reported (Moher et al., 2015).
3. Study Selection Criteria
The studies were selected from 1982 to 2021. In the identification phase, a total of 883 studies
(n=883) was found, i.e., 774 studies (n=774) from the database search and 109 extra records (n=109)
through other means. Of these 883 studies, 316 (n=316) were removed as duplicate records, leaving 567
studies (n=567) for screening. In the screening phase, 443 (n=443) of the 567 articles were excluded
due to record replications, and 124 studies (n=124) were screened and found eligible. During the
eligibility phase, 33 studies (n=33) were eliminated because their abstracts did not match the study
variables. In the inclusion phase, 50 (n=50) of the remaining 91 studies (n=91) were not included
because they were based on qualitative viewpoints. Finally, 41 quantitative studies (n=41) were
included in the meta-analysis. The PRISMA flow is represented in Figure 3.
Figure 3: PRISMA Flow Diagram for Meta-Analysis
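To make the screening arithmetic above easy to check, the following minimal Python sketch (an
illustration added here, not part of the original study) encodes the reported PRISMA counts and
verifies that each phase is internally consistent.

```python
# Reported PRISMA counts, phase by phase (illustrative bookkeeping only).
identified_database = 774          # records found via database search
identified_other = 109             # records found through other sources
duplicates_removed = 316
excluded_at_screening = 443
excluded_abstract_mismatch = 33
excluded_qualitative = 50

screened = identified_database + identified_other - duplicates_removed        # 567
assessed_for_eligibility = screened - excluded_at_screening                   # 124
full_text_eligible = assessed_for_eligibility - excluded_abstract_mismatch    # 91
included_quantitative = full_text_eligible - excluded_qualitative             # 41

assert screened == 567 and included_quantitative == 41
print(f"Studies retained for the meta-analysis: {included_quantitative}")
```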
4. Methodology
The research philosophy was positivism, and the systematic review was analyzed via a meta-analysis of
quantitative studies. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses)
model was used to carry out the systematic review (Moher et al., 2015). Web of Science (WoS), Scopus,
and Science Direct were the search engines (Balstad & Berg, 2020) selected as sources for searching
for papers. Google Scholar was excluded for several reasons. First, compared to many other archives
such as WoS or Scopus, its indexing practices are less stringent, often resulting in less effective
search results (Shareefa & Moosa, 2020). Furthermore, its findings cannot be extracted, in contrast
to the majority of other resources such as Science Direct and WoS (Moral-Munoz et al., 2020). So,
WoS, Scopus, and Science Direct were the only three search resources used by the investigator.
The research adopted three strategies to locate pertinent research publications. To start, the
investigator accessed three internet database systems: WoS, Scopus, and Science Direct. No date
constraints were applied because the date range was left at its default. Title, abstract, and phrases
were prioritized in the search parameters for the indexing sources.
The investigator searched the records up until 31st December 2021. Second, the review searched
internet resources for ancestor studies, i.e., existing publications cited by the identified studies.
Third, the researcher assessed the descendancy of the work by examining journal articles that cited
publications using the Kirkpatrick model. N denotes the sample size of each study in this inquiry.
The coding framework developed here is a component of a more thorough meta-analysis and the
relationships it examines. Each Pearson correlation was analyzed using the Fisher z (hyperbolic
arctangent) transformation, z = tanh⁻¹(r); the researcher then used Steiger's (1980) methods to
compute the variance and covariance of the z-transformed values within the Hedges and Olkin (1985)
meta-analysis procedure. The investigator sequentially acquired and transformed the data into Pearson
correlations
(r) (Schulze, 2004).
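As a concrete illustration of this transformation step, the short Python sketch below (an illustration
under the usual large-sample assumption, not the authors' JASP workflow) converts a study-level
Pearson correlation to Fisher's z, computes its approximate sampling variance of 1/(n - 3), and
back-transforms a z value to the correlation scale.

```python
import math

def fisher_z(r: float) -> float:
    """Fisher z (hyperbolic arctangent) transformation: z = tanh^-1(r)."""
    return math.atanh(r)

def fisher_z_variance(n: int) -> float:
    """Approximate sampling variance of z for a study with n participants."""
    return 1.0 / (n - 3)

def z_to_r(z: float) -> float:
    """Back-transform a (pooled) z value to the correlation scale."""
    return math.tanh(z)

# Illustrative numbers only: a study reporting r = .23 with n = 150 trainees.
z = fisher_z(0.23)
print(round(z, 3), round(fisher_z_variance(150), 4), round(z_to_r(z), 2))
```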
Researchers used a random-effects model to analyze the data, treating the included studies either as
a subset of a heterogeneous population about which they meant to draw inferences or as a whole group
from which they hoped to generalize (Borenstein et al., 2010). The researcher fitted the
random-effects model using the maximum-likelihood method and the JASP software. Because heterogeneity
in effect magnitude was anticipated, the author analyzed the range of population effects and provided
reliability and accuracy ranges. The precision of the calculated parameters is reflected in the
interval within which researchers can be confident that the underlying mean of the responses lies.
The range in which the majority of effect sizes lie, or the confidence interval, shows the variety of
effect sizes for a population (Whitener, 1990). To achieve the highest level of precision in the
search queries, the search terms were selected and determined by several characteristics. This study
focuses on the Kirkpatrick model and how it could be used as an evaluation system. These terms were
part of the investigation since the Kirkpatrick model is “a paradigm, structure, framework, typology,
approach, and typographic” (Holton, 1996, p. 50). The databases, keywords, tools, coding, and analysis
procedures used to conduct the meta-analysis are stated in Table 2.
Table 2: Method (Meta-Analysis)
S# Description Instrument Used
1 Analysis Pearson Correlation; Random Effects Method
2 Coding Reaction (A); Learning (B); Behavior (C); Result (D)
3 Software JASP
4 Databases Used Scopus; Science Direct; Web of Knowledge
Note. Meta-Analysis Methods
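The following Python sketch illustrates the kind of random-effects pooling on the Fisher-z scale
summarized in Table 2. It is a simplified stand-in for the JASP analysis: it uses the
DerSimonian-Laird estimator of the between-study variance rather than the maximum-likelihood
estimator reported in this paper, and the correlations and sample sizes shown are hypothetical.

```python
import math
from typing import List, Tuple

def random_effects_pool(r_values: List[float],
                        sample_sizes: List[int]) -> Tuple[float, Tuple[float, float]]:
    """Pool study correlations with a random-effects model on the Fisher-z scale."""
    z = [math.atanh(r) for r in r_values]
    v = [1.0 / (n - 3) for n in sample_sizes]      # within-study variances of z
    w = [1.0 / vi for vi in v]                     # fixed-effect (inverse-variance) weights

    # Fixed-effect pooled z and Cochran's Q heterogeneity statistic
    z_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
    q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, z))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(z) - 1)) / c)        # DerSimonian-Laird between-study variance

    # Random-effects weights, pooled z, and 95% CI back-transformed to r
    w_re = [1.0 / (vi + tau2) for vi in v]
    z_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    ci = (math.tanh(z_re - 1.96 * se), math.tanh(z_re + 1.96 * se))
    return math.tanh(z_re), ci

# Hypothetical study correlations and sample sizes
pooled_r, ci95 = random_effects_pool([0.30, 0.15, 0.42, 0.05], [120, 85, 60, 200])
print(f"pooled r = {pooled_r:.2f}, 95% CI = [{ci95[0]:.2f}, {ci95[1]:.2f}]")
```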
4.1 Reaction to Learning
Thirty-three investigations (n=33) in total were combined in a forest plot to determine the
association between trainee reaction (level-1) and learning (level-2). The forest plot is a visual
depiction of results from several research investigations on the same topic together with their
combined estimate (Lalkhen & McCluskey, 2008). The forest plot has two columns. The studies' names
are listed in the left column, usually in chronological order from top to bottom. The right column
displays the effect size measurement for each investigation, with confidence intervals shown as
horizontal lines. A vertical line denotes no effect. If a study's confidence interval crosses this
line, that study cannot be said to show an effect at the point estimate; the same holds for the pooled
effect examined in the meta-analysis. If the diamond's vertices cross the line of no effect, the
conclusion of the meta-analysis cannot be deemed to differ from no effect at the specified confidence
level. According to the forest plot data, 30 (n=30) of the 33 investigations (n=33) confirmed a
positive relationship between trainee reaction (level-1) and learning (level-2), whereas only three
(n=3) investigations found a negative correlation between the two levels. Under the random-effects
estimation, the overall value, shown by the diamond at the bottom, represents the combined observed
outcome of all studies and lies to the right of and somewhat above zero (r=.23, CI [.07, .39]). This
indicates a positive relationship between trainee reaction (level-1) and learning (level-2).
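The sketch below shows how a forest plot of this kind can be drawn with matplotlib. It is purely
illustrative: the study labels, correlations, and confidence intervals are made up, and the pooled
reaction-to-learning values reported above (r = .23, CI [.07, .39]) are reused only to position the
diamond.

```python
import matplotlib.pyplot as plt

# Hypothetical study-level correlations and 95% confidence intervals
studies = ["Study A", "Study B", "Study C", "Study D"]
r       = [0.30, 0.15, 0.42, 0.05]
ci_low  = [0.12, -0.06, 0.19, -0.09]
ci_high = [0.46, 0.35, 0.61, 0.19]
pooled, pooled_low, pooled_high = 0.23, 0.07, 0.39   # pooled reaction-to-learning estimate

fig, ax = plt.subplots(figsize=(6, 3))
y = list(range(len(studies), 0, -1))                 # studies listed top to bottom
ax.hlines(y, ci_low, ci_high, color="black")         # confidence intervals as horizontal lines
ax.plot(r, y, "s", color="black")                    # point estimates
ax.axvline(0, linestyle="--", color="grey")          # vertical line of no effect

# Pooled random-effects estimate drawn as a diamond below the individual studies
ax.fill([pooled_low, pooled, pooled_high, pooled], [0, 0.25, 0, -0.25], color="black")

ax.set_yticks(y + [0])
ax.set_yticklabels(studies + ["Pooled (random effects)"])
ax.set_xlabel("Correlation (r)")
plt.tight_layout()
plt.show()
```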
4.2 Learning to Behavior
The link between trainee learning (level-2) and behavior (level-3) was assessed using a forest plot
covering a total of 29 studies (n=29). According to the forest plot data, twenty-six (n=26) of the
twenty-nine studies (n=29) verified a positive correlation between trainee learning (level-2) and
behavior (level-3), while only three studies (n=3) revealed a negative correlation between the two
levels. Under the random-effects estimation, the overall value, shown by the diamond at the bottom,
represents the combined observed outcome of all studies and lies to the right of and somewhat above
zero (r=.28, CI [.17, .38]). This indicates that trainee learning (level-2) and behavior (level-3) are
positively correlated.
4.3 Behavior to Result
Three studies (n=3) were combined in a forest plot to determine the association between trainee
behavior (level-3) and result (level-4). According to the forest plot results, all three (n=3)
investigations indicated a strong positive correlation between trainee behavior (level-3) and result
(level-4). Under the random-effects estimation, the overall value, shown by the diamond at the bottom,
represents the combined observed outcome of all studies and lies to the right of and somewhat above
zero (r=.44, CI [.27, 1.15]). This indicates that there is generally a positive correlation between
trainee behavior (level-3) and result (level-4).
4.4 Summary of Meta-Analysis
A comprehensive set of 41 papers (n=41) about the Kirkpatrick model from 1982 to 2021, with a combined
sample of 8,825 participants (n=8,825), was considered in the meta-analytic review. Thirty-three
(n=33) of the forty-one studies concerned trainee reaction (level-1) and learning (level-2),
twenty-nine (n=29) concerned trainee learning (level-2) and behavior (level-3), and just three
concerned trainee behavior (level-3) and result (level-4). The standard procedure known as PRISMA was
used to conduct the systematic review; the identification, screening, eligibility, and inclusion of
studies were all parts of this procedure. For 1982-2021, approximately 774 studies (n=774) were found
through evaluation and database searches, and an extra 109 records (n=109) related to the Kirkpatrick
model were found from other sources. Web of Science, Scopus, and Science Direct were selected as the
repositories for articles pertinent to the study's topic. The researcher fitted the random-effects
model using maximum likelihood estimation and the JASP software. According to the forest plot data,
thirty (n=30) of the thirty-three papers (n=33) confirmed a positive connection between trainee
reaction (level-1) and learning (level-2), whereas only three studies (n=3) found a negative
correlation between these levels. Second, 26 of the twenty-nine papers (n=29) indicated a positive
correlation between trainee learning (level-2) and behavior (level-3), whereas only three studies
(n=3) revealed a negative correlation between these levels. Third, all three (n=3) investigations
confirmed a positive correlation between trainee behavior (level-3) and trainee result (level-4).
Figures 4, 5, and 6 show the associations a) reaction to learning, b) learning to behavior, and
c) behavior to result. The summary of the meta-analytic review is shown in Table 2.4.
Table 2.4: Summary (Meta-Analytic Review)
S# Variables Relationship Positive Relationship Negative Relationship
1 Reaction (Level-1) and Learning (Level-2) 30 3
2 Learning (Level-2) and Behavior (Level-3) 26 3
3 Behavior (Level-3) and Result (Level-4) 3 0
4 Total Relationships Identified 56 6
5. Discussion
Numerous training and assessment studies have failed to identify clear causal correlations among the
four levels of the training evaluation model (Alliger et al., 1997; Alliger & Janak, 1989). The
sequential ordering of training effectiveness has, though, rarely been studied in training and
assessment research (Alliger et al., 1997). On the other hand, only a small number of training
investigations have shown some evidence in favor of the hierarchically ordered correlation of all
four stages (Liao & Hsu, 2019; Manzoor & Din, 2019; Homklin, 2014; Saks & Burke, 2012; Alliger &
Janak, 1989). According to the literature, there are mixed findings pertaining to the relationships
among Kirkpatrick's four levels (Alsalamah & Callinan, 2021; Baluku, Matagi & Otto, 2020; Costabella,
2017; Manzoor & Din, 2019).
Figure 4: Forest Plot Outcome (Reaction to Learning)
Figure 5: Forest Plot Outcome (Learning to Behavior)
Therefore, this study meta-analytically examined the relationships among Kirkpatrick's four levels.
The PRISMA model was used to conduct the systematic review, and the researcher fitted the
random-effects model using maximum likelihood estimation. Initially, the relationship between reaction
and learning was identified from the prior literature. Based on the estimation, about thirty studies
confirmed that a positive association exists between the two initial levels, i.e., trainee reaction
and learning. Secondly, the association between learning and behavior was identified from the prior
literature; based on the forest plot estimation, twenty-six studies confirmed a positive association
between trainee learning and behavior. Lastly, the association between behavior and result was
estimated from the correlation values of the prior studies, and all three studies confirmed a positive
correlation between trainee behavior and trainee result. Overall, the study revealed that the majority
of studies report a positive association among the four levels of Kirkpatrick's training evaluation
model.
Figure 6: Forest Plot Outcome (Behavior to Result)
5.1 Theoretical Contribution
The authors have reported several gaps in the current literature. These comprise, first, the mixed
findings in the literature pertaining to the linkages among the four levels of the Kirkpatrick model
(Alsalamah & Callinan, 2021; Baluku, Matagi & Otto, 2020; Costabella, 2017; Manzoor & Din, 2019);
second, the neglect of measuring level three, i.e., behavior, and level four, i.e., result (Alliger &
Janak, 1989; Clement, 1982; Homklin, 2014); and lastly, the scarcity of studies on the Kirkpatrick
training evaluation model in the social sciences domain. This research aimed to bridge these gaps,
first, by covering the last forty years of literature on the Kirkpatrick model (1982-2021); second,
by representing, incorporating, and reporting the literature via the PRISMA model; and third, by
evaluating the collected data via forest plots and estimating the correlation values of the factors
with a random-effects model. Additionally, the investigator tried to fill the empirical gaps by
connecting Kirkpatrick's four levels of training effectiveness. The study mitigates the gap created
by the mixed findings pertaining to the linkages among the four levels of the Kirkpatrick model and
consolidates the literature on Kirkpatrick's four levels of training effectiveness. Moreover, the
study may deliver valuable knowledge of training effectiveness and a baseline for training evaluation
to academia and industry.
6. Conclusion
The study's goal was to undertake a systematic review (meta-analysis) of the KTEM. The PRISMA model
was used to carry out the systematic review to meet this goal. A complete set of 41 studies (n=41)
about the Kirkpatrick model from 1982 to 2021, with a combined sample of 8,825 participants (n=8,825),
was considered. Web of Science, Scopus, and Science Direct were selected as the repositories for
articles pertinent to the study's variables. The researcher fitted the random-effects framework using
maximum likelihood estimation. According to the forest plot findings, most studies established a
positive relationship between trainee reaction and learning. Secondly, many studies revealed a strong
relationship between trainee learning and behavior. Third, a clear correlation between trainee
behavior and trainee result was seen in all three relevant studies. The study results have significant
implications for applied investigations and for HRD practitioners. The study's evaluations can inform
training programs and suggest suitable HRD tactics for advancing and strengthening trainees. The
findings provide important data on training efficacy and essential standards for assessing upcoming
capacity-building strategies for training. The KTEM may provide substantial evidence that increases
transparency in the selection and assessment of training benchmarks. The four-level framework for
evaluating training may be used by experienced trainers, decision-makers, and relevant training
administrators. To ensure acceptance and successful training transfer, there must be a sufficient
level of perceived practical relevance. It is suggested that when undertaking or delivering training
courses, training administrators should take into account and gauge training effectiveness using the
Kirkpatrick framework, which measures trainees' reaction, learning, behavior, and result. To increase
the efficiency of training, it is also necessary to carefully assess the standards of social and
professional support. The research consolidates the literature on Kirkpatrick's four-level training
evaluation model, covering trainees' reaction, learning, behavior, and result.
6.1 Limitations and Future Research
This study has a few shortcomings that should be highlighted for future studies in this area. The
meta-analysis covered only the period 1982-2021 and included only 41 quantitative studies. This limits
the generalizability of the findings, because qualitative studies were excluded owing to the
statistical nature of the systematic review. In the future, findings from qualitative studies may also
be incorporated into the investigation. Furthermore, future researchers should consider employing
mixed methods to gain a more thorough grasp of the Kirkpatrick model. Additionally, a large and varied
sample in conjunction with sophisticated data processing techniques might increase the likelihood that
the study's findings will be more generalizable.
References
Abdelhakim, A. S., Jones, E., Redmond, E. C., Griffith, C. J., & Hewedi, M. (2018). Evaluating cabin
crew food safety training using the Kirkpatrick model: an airlines’ perspective. British Food Journal.
120(7), 1574-1589.
Alliger, G. M., & Janak, E. A. (1989). Kirkpatrick’s levels of criteria: Thirty years later. Personnel Psychol-
ogy, 42(4), 331-341.
Alliger, G. M., Tannenbaum, S. I., Bennett, Jr., W., Traver, H., & Shotland, A. (1997). A meta-analysis
on the relations among training criteria. Personnel Psychology, 50(4), 341-358.
Alsalamah, A., & Callinan, C. (2021). Adaptation of Kirkpatrick’s four-level model of training criteria
to evaluate training programs for head teachers. Education Sciences, 11(3), 116.
Alsalamah, A., & Callinan, C. (2021). The Kirkpatrick model for training evaluation: bibliometric
analysis after 60 years (1959-2020). Industrial and Commercial Training. 1(2), 36-63.
Baldwin, T. T. (1992). Effects of alternative modeling strategies on outcomes of interpersonal-skills
training. Journal of Applied Psychology, 77(2), 147.
Balstad, M. T., & Berg, T. (2020). A long-term bibliometric analysis of journals influencing management
accounting and control research. Journal of Management Control, 30(4), 357-380.
Baluku, M. M., Matagi, L., & Otto, K. (2020). Exploring the link between mentoring and intangible
outcomes of entrepreneurship: the mediating role of self-efficacy and moderating effects of gen-
der. Frontiers in Psychology, 11, 1556.
Bates, R. A., Holton. E. F. III, Seyler, D. A. & Carvalho, M. A. (2000). The role of interpersonal factors
in the application of computer-based training in an industrial setting. Human Resource Development
International, 3(1), 19-43.
Bell, B. S., & Ford, J. K. (2007). Reactions to skill assessment: The forgotten factor in explaining moti-
vation to learn. Human Resource Development Quarterly, 18(1), 33-62.
Boet, S., Bould, M.D., Bruppacher, H.R., Desjardins, F., Chandra, D.B., & Naik, V.N. (2011). Looking
in the mirror: Self-debriefing versus instructor debriefing for simulated crises. Critical Care Medicine,
39, 1377-1381.
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2010). A basic introduction to
fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods, 1(2), 97-111.
Cahapay, M. B. (2021). Kirkpatrick model: Its limitations as used in higher education evaluation. Inter-
national Journal of Assessment Tools in Education, 8(1), 135-144.
Cannon-Bowers, J. A., & Salas, E. (1997). A framework for developing team performance measures in
training. In Team performance assessment and measurement (pp. 57-74). Psychology Press.
Cannon-Bowers, J. A., Salas, E., Tannenbaum, S. I., & Mathieu, J. E. (1995). Toward theoretically based
principles of training effectiveness: a model & initial empirical investigation. Military Psychology,
7(2), 141-164.
Chronister, C., & Brown, D. (2012). Comparison of simulation debriefing methods. Clinical Simulation
in Nursing, 8(2), 69-81.
Clement, R. W. (1982). Testing the hierarchy theory of training evaluation: An expanded role for trainee
reactions. Public Personnel Management, 11(2), 176-184.
Costabella, L. M. (2017). Do high school graduates benefit from intensive vocational training? International
Journal of Manpower, 4(3), 55-62.
Dachner, A. M., Ellingson, J. E., Noe, R. A., & Saxton, B. M. (2021). The future of employee develop-
ment. Human Resource Management Review, 31(2), 100732.
Deeks, J. J. (2002). Issues in the selection of a summary statistic for meta-analysis of clinical trials with
binary outcomes. Statistics in Medicine, 21(11), 1575-1600.
Dreifuerst, K.T. (2012). Using debriefing for meaningful learning to foster development of clinical
reasoning in simulation. Journal of Nursing Education, 51, 326-333.
Fisher, S. L., & Ford, J. K. (1998). Differential effects of learning effort & goal orientation on two
learning outcomes. Personnel Psychology, 51, 397-420.
Fisher, S. L., Wasserman, M. E., & Orvis, K. A. (2010). Trainee reactions to learner control: an important
link in the elearning equation. International Journal of Training and Development, 14(3), 198-208.
Frayne, C. A., & Geringer, J. M. (2000). Self-management training for improving job performance: A
field experiment involving salespeople. Journal of Applied Psychology, 85(3), 361.
Grant, J.S., Dawkins, D., Molhook, L., Keltner, N.L., & Vance, D.E. (2014). Comparing the effectiveness
of video-assisted oral debriefing and oral debriefing alone on behaviors by undergraduate nursing
students during high-fidelity simulation. Nurse Education in Practice, 14, 479-484.
Grant, J.S., Moss, J., Epps, C., & Watts, P. (2010). Using video-facilitated feedback to improve student
performance following high-fidelity simulation. Clinical Simulation in Nursing, 6(2), 112-131.
Gully, S. M., Payne, S. C., Koles, K. L. K., & Whiteman, J. A. K. (2002). The impact of error training
and individual differences on training outcomes: An attribute-treatment interaction perspective.
Journal of Applied Psychology, 87(1), 143-155.
Hazan-Liran, B., & Miller, P. (2020). The relationship between psychological capital and academic adjust-
ment among students with learning disabilities and attention deficit hyperactivity disorder. European
Journal of Special Needs Education, 1-14.
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. Academic press. New York.
Ho, A. D., Arendt, S. W., Zheng, T., & Hanisch, K. A. (2016). Exploration of hotel managers’ training
evaluation practices and perceptions utilizing Kirkpatrick’s and Phillips’s models. Journal of Human
Resources in Hospitality & Tourism, 15(2), 184-208.
Hoffmann, T. C., Oxman, A. D., Ioannidis, J. P., Moher, D., Lasserson, T. J., Tovey, D. I., ... & Glasziou,
P. (2017). Enhancing the usability of systematic reviews by improving the consideration and de-
scription of interventions. Business Management Journal, 35(8), 1-7.
Holton, E. F. (1996). The flawed four level evaluation model. Human Resource Development Quarterly,
7(1), 5–21.
Homklin, T. (2014). Training effectiveness of skill certification system: The case of automotive industry
in Thailand. Unpublished doctoral dissertation. Hiroshima University, Hiroshima, Japan.
Lalkhen, A. G., & McCluskey, A. (2008). Clinical tests: sensitivity and specificity. Continuing education
in anaesthesia critical care & pain, 8(6), 221-223.
Liao, S. C., & Hsu, S. Y. (2019). Evaluating a continuing medical education program: New world
Kirkpatrick model approach. International Journal of Management, Economics and Social Sciences
(IJMESS), 8(4), 266-279.
Liao, W.C. & Tai, W. T. (2006). Work-related justice, motivation to learn, & training outcomes. Social
Behavior & Personality, 34(5), 545-556.
Liebermann, S. & Hoffmann, S. (2008). The impact of practical relevance on training transfer: evidence
from a service quality training program for German bank clerks. International Journal of Training &
Development, 12(2), 74-86.
Lim, H., Lee, S. G., & Nam, K. (2007). Validating E-learning factors affecting training effectiveness. In-
ternational Journal of Information Management, 27(1), 22-35.
Lin, Y. T., Chen, S. C., & Chuang, H. T. (2011). The effect of organizational commitment on employee
reactions to educational training: An evaluation using the Kirkpatrick four-level model. International
Journal of Management, 28(3), 926.
Manzoor, S. R. & Din, Z.U (2019). Measuring the training effectiveness in the police sector of Pakistan:
A Kirkpatrick model intervention. Journal of Managerial Sciences, 13(2). 30-45.
Mariani, B., Cantrell, M. A., Meakim, C., Prieto, P., & Dreifuerst, K. T. (2013). Structured debriefing
and students’ clinical judgment abilities in simulation. Clinical Simulation in Nursing, 9(5), e147-e155.
McEvoy, G. M. (1997). Organizational change and outdoor management education. Human Resource
Management: Published in Cooperation with the School of Business Administration, The University of Michigan
and in alliance with the Society of Human Resources Management, 36(2), 235-250.
Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., ... & Stewart, L. A. (2015).
Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015
statement. Systematic Reviews, 4(1), 1-9.
Moral Muñoz, J. A., Herrera Viedma, E., Santisteban Espejo, A., & Cobo, M. J. (2020). Software tools
for conducting bibliometric analysis in science: An up-to-date Review. 29(1), 118-140.
Orvis, K. A., Fisher, S. L., & Wasserman, M. E. (2009). Power to the people: using learner control to
improve trainee reactions & learning in web-based instructional environments. Journal of Applied
Psychology, 94(4), 960-971.
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., ... & Moher,
D. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Sys-
tematic reviews, 10(1), 1-11.
Patki, S., Sankhe, V., Jawwad, M., & Mulla, N. (2021). Personalised Employee Training. In 2021 Interna-
tional Conference on Communication information and Computing Technology (ICCICT) (pp. 1-6). IEEE.
Reed, S.J. (2015). Written debriefing: Evaluating the impact of the addition of a written component
when debriefing simulations. Nurse Education in Practice, 15, 543-548.
Reed, S.J., Andrews, C.M., & Ravert, P. (2013). Debriefing simulations: Comparison of debriefing with
video and debriefing alone. Clinical Simulation in Nursing, 9, 88-103.
Richman-Hisrich, W. (2001). Post training interventions to enhance transfer: the moderating effects of
work environments, Human Resource Development Quarterly, 12(2), 105-120.
Saks, A. M., & Burke, L. A. (2012). An investigation into the relationship between training evaluation
and the transfer of training. International Journal of Training and Development, 16(2), 118-127.
Sampson, M., Tetzlaff, J., & Urquhart, C. (2011). Precision of healthcare systematic review searches in
a cross-sectional sample. Research Synthesis Methods, 2(2), 119-125.
Samwel, J. O. (2018). Impact of employee training on organizational performance–case study of drilling
companies in geita, shinyanga and mara regions in tanzania. International Journal of Managerial
Studies and Research, 6(1), 36-41.
Savoldelli, G.L., Naik, V.N., Park, J., Joo, H.S., Chow, R., & Hamstra, S.J. (2006). Value of debriefing
during simulated crisis management: Oral versus video-assisted oral feedback. Anesthesiology, 105,
279-285.
Schulze, R. (2004). Meta-analysis-A comparison of approaches. Hogrefe Publishing.
Shareefa, M., & Moosa, V. (2020). The Most-Cited Educational Research Publications on Differentiated
Instruction: A Bibliometric Analysis. European Journal of Educational Research, 9(1), 331-349.
Shinnick, M.A., Woo, M., Horwich, T.B., & Steadman, R. (2011). Debriefing: The most important
component in simulation? Clinical Simulation in Nursing, 7, 77-90.
Sitzmann, T., Bell, B. S., Kraiger, K., & Kanar, A. M. (2009). A multilevel analysis of the effect of
prompting self-regulation in technology-delivered instruction. Personnel Psychology, 62(4), 697-734.
Steiger, J. H. (1980). Tests for comparing elements of a correlation matrix. Psychological bulletin, 87(2), 245.
Sulsky, L. M., & Kline, T. J. B. (2007). Understanding frame-of-reference training success: a social learning
theory perspective. International Journal of Training & Development, 11(2), 121-131.
Tan, J. A., Hall, R. J., & Boyce, C. (2003). The role of employee reactions in predicting training effec-
tiveness. outcomes. Human Resource Development Quarterly, 14(4), 397-411.
Tracey, J. B., Hinkin, T. R., Tannenbaum, S., & Mathieu, J. E. (2001). The influence of individual
characteristics & the work environment on varying levels of training outcomes. Human Resource
Development Quarterly, 12(1), 5-23.
Van Heukelom, J.N., Begaz, T., & Treat, R. (2010). Comparison of postsimulation debriefing versus
in-simulation debriefing in medical simulation. Simulation in Healthcare, 5(2), 91-97.
Velada, R., & Caetano, A. (2007). Training transfer: the mediating role of perception of learning. Journal
of European Industrial Training. 31(4), 283-296.
Warr, P., & Bunce, D. (1995). Trainee characteristics and the outcomes of open learning. Personnel
Psychology, 48(2), 347-375.
Warr, P., Allan, C., & Birdi, K. (1999). Predicting three levels of training outcome. Journal of Occupational
and Organizational Psychology, 72(3), 351-375.
Weaver, A. (2015). The effect of a model demonstration during debriefing on students’ clinical judg-
ment, self-confidence, and satisfaction during a simulated learning experience. Clinical Simulation
in Nursing, 11(1), 20-26.
Welke, T.M., LeBlanc, V.R., Savoldelli, G.L., Joo, H.S., Chandra, D.B., Crabtree, N.A., & Naik, V.N.
(2009). Personalized oral debriefing versus standardized multimedia instruction after patient crisis
simulation. Anesthesia and Analgesia, 109, 183-189.
Wexley, K. N., & Baldwin, T. T. (1986). Post training strategies for facilitating positive transfer: An
empirical exploration. Academy of Management Journal, 29(3), 503-520.
Whitener, E. M. (1990). Confusion of confidence intervals and credibility intervals in meta-analysis. Jour-
nal of Applied Psychology, 75(3), 315.
Zielińska-Tomczak, Ł., Przymuszała, P., Tomczak, S., Krzyśko-Pieczka, I., Marciniak, R., & Cerbin-Koczo-
rowska, M. (2021). How do dieticians on instagram teach? The potential of the Kirkpatrick model
in the evaluation of the effectiveness of nutritional education in social media. Nutrients, 13(6),
2005-2022.
... Kirkpatrick's evaluation model, developed in 1959, is an approach to evaluating the effectiveness of training in an organisation. It delineates four training outcome levels: reaction, learning, behaviour and results (Kirkpatrick & Kirkpatrick, 2006;Nawaz et al., 1982). The levels are in sequence that progress in difficulty in the evaluation process. ...
Article
Full-text available
The integration of makerspace principles presents a promising avenue for educators to implement impactful STEM project-based learning initiatives. To this end, a three-day workshop entitled "Introductory STEM Makerspace Workshop – Learning Through Making" was conducted for lower secondary science teachers, with a particular focus on electric circuits. Throughout this workshop, educators were furnished with the basic knowledge and skills to facilitate maker-centered project-based learning experiences in science education. Furthermore, participants were challenged to conceptualise and develop lessons incorporating maker-centered learning in both classroom settings and extracurricular activities. This study evaluated the effectiveness of this workshop for 18 teachers employing three of the four levels of Kirkpatrick’s Evaluation Model. At the first level, encompassing participants' reactions, data were collected through an online evaluation form and direct observation during the workshop sessions, revealing a predominantly positive response from the participants.. Subsequently, at the second level, which pertains to learning outcomes, participants' acquisition of skills was evaluated via examination of photographs and reflective submissions on Google Drive subsequent to each workshop session. Findings indicated that educators attained proficiency in fundamental tasks such as soldering, 3D printing design, and fabrication of basic electrical devices, exemplified through their application of parallel circuit concepts. As for the third level, concerning behavioural change, was assessed through voluntary feedback solicited from participants. Notably, four respondents reported implementing acquired knowledge and skills within their educational contexts at the time of data collection. Thus, this evaluation determines whether and to what extent the workshop's effectiveness is for the participants. Additionally, it aids in identifying strengths and shortcomings and serves as a decision-making tool for future improvements. Keywords: STEM makerspace, teachers’ training, evaluation, Kirkpatrick’s Evaluation Model
... availability, then the (trainees' behavior) with a weighted mean (1. 87) & a moderate degree of availability, after that the (trainees' reaction) with a weighted mean (1. 81) & a moderate degree of availability while the (organizational outcomes) came last with a weighted mean (1. ...
... This article's approach towards HCD is premised on Kirkpatrick's (1976) model. It is a model globally recognised for assessing training efficacy and consists of four levels, namely: (1) reaction, (2) learning, (3) behaviour and (4) results (Nawaz et al., 2023). The model's appropriateness for this study is premised on the fact that it is constructed by means of an effective and constructive technique to examine learning outcomes among individuals and organisational structures regarding training. ...
Article
Full-text available
Orientation: Retaining staff remains a daunting task for organisations, more so for millennials. They are significantly less satisfied in their jobs and are more likely to have perceptions that may negatively impact job satisfaction, commitment, engagement and turnover intentions (TIs). Research purpose: The aim of this study is twofold: firstly, it explores the effect of human capital development (HCD) on staff retention, and secondly, it examines the intervening function of job satisfaction, commitment and loyalty on the correlation between HCD and staff retention. Motivation for the study: Human capital development culture is related to a myriad of organisational behaviours impacting TIs. Research approach/design and method: A quantitative and cross-sectional research approach was adopted, employing an online survey with 210 respondents. SMART PLS 4 was used for analysis while structural equation modelling (SEM) was employed to gauge the structural correlations between the factors. Main findings: Firstly, structural equation modelling results revealed that HCD has a positive but small as well as non-significant impact on staff retention. Secondly, the mediation inquiry revealed that job satisfaction and commitment intervene in the correlation between HCD and staff retention or TI, except for loyalty. Practical/managerial implications: The pursuit of an HCD culture aligned to organisational goals is a necessary remedy to not only advance sustainable efficiencies and success but also enhance staff retention. Contribution/value-add: This empirical research evidence provides a much broader perspective on the role of HCD on the nexus between multiple organisational behaviours related to TIs. Entrenching HCD culture in HR policies and practices could result in desired organisational outcomes.
... Level 3 (behavior): This stage examines whether the course has influenced a change in the behavior of the learners. Level 4 (results): This stage evaluates the impact of conducting this course on organizational indicators [14,15]. By employing Kirkpatrick's evaluation model, program evaluators can ensure a comprehensive and reliable assessment of the program's effectiveness. ...
Article
Full-text available
Background Pre-hospital trauma life support (PHTLS) training courses have been developed and widely adopted to enhance the proficiency of pre-hospital personnel in handling trauma patients. The objective of this study was to assess the effectiveness of the educational program for managing trauma patients in the pre-hospital emergency setting, utilizing Kirkpatrick’s educational evaluation model. Methods This is an observational approach, consisting of four sub-studies. The PHTLS course was conducted over a 2-day period, encompassing both theoretical and practical components. For this study, we selected pre-hospital personnel from three emergency aid stations using a convenient sampling method. These personnel underwent their first-ever PHTLS course training, and we subsequently analyzed the effectiveness of the training program using Kirkpatrick’s four levels of evaluation: satisfaction, learning, behavior, and results. Results The study conducted on Kirkpatrick’s first-level analysis revealed that participants expressed a high level of satisfaction with the quality of all aspects of the course. Moving on to the second and third levels, namely learning and behavior, significant improvements were observed in the average scores of various skills that were examined both immediately after the course and 2 months later (P < 0.05). However, when it comes to the fourth level and the impact of the course on indicators such as mortality rate and permanent disability, no significant changes were observed even after an average of 3 months since the course was introduced. Conclusion The implementation of PHTLS has been linked to the enhancement of participants’ skills in treating trauma patients, leading to the application of acquired knowledge in real-life scenarios and a positive change in participants’ behavior. The evaluation of PHTLS courses in Iran, as in other countries, highlights the need for specialized training in pre-hospital trauma care. To ensure the continued effectiveness of the PHTLS course, it is advisable for managers and policymakers to encourage regular participation of PHTLS employees in the program.
Article
Full-text available
The aim of the study was to investigate the chain of association amongst the four-level model of Kirkpatrick's training effectiveness with the integration of individual and work-related characteristics within the police sector of Pakistan. The questionnaires were distributed amongst the trainees of the police department in Khyber Pakhtunkhwa, Punjab, and Sindh provinces of Pakistan. The collected data was computed and analyzed via statistical software, namely SPSS and AMOS. factor analysis, structure equation model, and path analysis tools to measure the result. The statistical consequence exhibited that there exists a significant chain of relationships within the four levels of the training effectiveness model, and trainee self-efficacy, learning motivation, and social support were found to be considerable moderators of the training effectiveness model. Moreover, trainee motivation was found to be an insignificant moderator in relation to our level of the training effectiveness model. The study concluded that trainers must consider the importance of individual and work-related characteristics when giving training. The outcome of this study was projected to deliver a valuable contribution to the theoretical and academic research investigation and the professionals of human resource development (HRD) working in Pakistan.
Article
Full-text available
Purpose A number of studies on Kirkpatrick’s four-level training evaluation model have been published, since its inception in 1959, either investigating it or applying it to evaluate the training process. The purpose of this bibliometric analysis is to reconsider the model, its utility and its effectiveness in meeting the need to evaluate training activities and to explain why the model is still worth using even though other later models are available. Design/methodology/approach This study adopts a “5Ws+1H” model (why, when, who, where, what and how); however, “when” and “how” are merged in the methodology. A total of 416 articles related to Kirkpatrick’s model published between 1959 and July 2020 were retrieved using Scopus. Findings The Kirkpatrick model continues to be useful, appropriate and applicable in a variety of contexts. It is adaptable to many training environments and achieves high performance in evaluating training. The overview of publications on the Kirkpatrick model shows that research using the model is an active and growing area. The model is used primarily in the evaluation of medical training, followed by computer science, business and social sciences. Originality/value This paper presents a comprehensive bibliometric analysis to reconsider the model, its utility, its effectiveness in meeting the need to evaluate training activities, its importance in the field measured by the growth in studies on the model and its applications in various settings and contexts.
The growing popularity of health education on social media indicates the need for its appropriate evaluation. This paper presents the potential of the Kirkpatrick Model (KM), with New World Kirkpatrick Model (NWKM) additions, to evaluate the nutritional education provided by dieticians via Instagram. The Instagram profiles of ten dieticians providing nutritional education to their followers were analyzed in March and April 2021; the sample included profiles of both macro- and micro-influencers. The quantitative data analyzed included the Instagram Engagement Rate and the number of likes and comments per post. The qualitative analysis of the comments followed the theoretical framework provided by the KM and NWKM. The collected data showed followers' satisfaction, commitment, and the relevance of the presented content, fulfilling Level 1 of the NWKM. Level 2 of the NWKM was represented by four of its five dimensions (knowledge, attitude, confidence, commitment); no comments were found for skills. Both Level 3 (Behavior) and Level 4 (Results) of the KM were met, although the use of the NWKM for them seems limited. The KM can be used to evaluate nutritional education on social media; the NWKM additions seem applicable mostly to Levels 1 and 2.
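The quantitative part of the analysis above rests on the Instagram Engagement Rate. A common per-post formulation is (likes + comments) divided by follower count; whether the study used exactly this definition is not stated, so the formula and all figures below are illustrative assumptions.

```python
# Sketch of a per-post engagement-rate calculation for an Instagram profile.
# The (likes + comments) / followers formula and all numbers are illustrative assumptions.
posts = [
    {"post_id": "p1", "likes": 1250, "comments": 48},
    {"post_id": "p2", "likes": 980,  "comments": 31},
    {"post_id": "p3", "likes": 1710, "comments": 77},
]
followers = 25_000

for post in posts:
    engagement_rate = (post["likes"] + post["comments"]) / followers * 100
    print(f'{post["post_id"]}: engagement rate = {engagement_rate:.2f}%')

mean_rate = sum(p["likes"] + p["comments"] for p in posts) / (len(posts) * followers) * 100
print(f"profile mean engagement rate = {mean_rate:.2f}%")
```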
Training programmes are evaluated to verify their effectiveness, assess their ability to achieve their goals, and identify the areas that require improvement. The goal of evaluators is therefore to develop an appropriate framework for evaluating training programmes. This study adapted Kirkpatrick's four-level model of training criteria, published in 1959, to evaluate training programmes for head teachers according to their own perceptions and those of their supervisors. The adapted model may help evaluators conceptualise the assessment of learning outcomes of training programmes with concrete metrics and instruments, and it also helps to determine the strengths and weaknesses of the training process. The adaptation includes metrics and instruments for each of the four levels of the model: reaction criteria, learning criteria, behaviour criteria and results criteria. The adapted model was applied to evaluate 12 training programmes for female head teachers in Saudi Arabia; the study sample comprised 250 trainee head teachers and 12 supervisors. The results indicated that the adapted Kirkpatrick evaluation model was very effective in evaluating educational training for head teachers.
One of the most widely known evaluation models adapted to education is the Kirkpatrick model. However, this model has limitations when used by evaluators, especially in the complex environment of higher education. Addressing the lack of a collective effort to discuss these limitations, this review paper presents a descriptive analysis of the limitations of the Kirkpatrick evaluation model in higher education. Three themes of limitations were identified: a propensity towards the use of the lower levels of the model; a rigidity which leaves out other essential aspects of the evaluand; and a paucity of evidence on the causal chains among the levels. It is suggested that, when employing the Kirkpatrick model in higher education, evaluators should address these limitations by considering more appropriate methods, integrating contextual inputs into the evaluation framework, and establishing causal relationships among the levels. These suggestions are discussed at the end of the study.
A long line of research in Positive Psychology highlights Psychological Capital (PsyCap) as a key factor in forecasting post-secondary students' academic adjustment. Such work, however, focuses on the adaptation of typical students. Studies focusing on exceptional students, such as those with learning difficulties, pinpoint their marked disadvantages but have not considered PsyCap as a positive construct in their academic adjustment. This study compared the PsyCap and academic adjustment of 251 students with learning disabilities (LD; 40 males and 83 females, mean age 26) or attention deficit hyperactivity disorder (ADHD; 31 males and 97 females, mean age 27) to 250 typical counterparts (83 males and 167 females, mean age 25). It was designed to clarify whether PsyCap mediated the relationship between participant group and academic adjustment. Findings suggested that LD/ADHD students manifest lower levels of PsyCap and academic adjustment, and that PsyCap was a significant mediator of the relationship between participant group and academic adjustment. Findings also pointed to a strong, positive relationship between PsyCap and academic adjustment, with a uniform magnitude for the two groups. The importance of the findings is discussed with direct reference to PsyCap's role in post-secondary academic adjustment.
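Mediation claims of this kind (participant group, to PsyCap, to academic adjustment) are commonly tested by estimating the indirect effect a×b and bootstrapping its confidence interval. The sketch below uses simulated data; the group coding, effect sizes, and the two-regression estimator are assumptions, not the study's procedure.

```python
# Sketch of a simple mediation test (group -> PsyCap -> adjustment) with a
# bootstrapped indirect effect. Data are simulated, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
group = rng.integers(0, 2, n)                                   # 0 = typical, 1 = LD/ADHD (coding assumed)
psycap = -0.6 * group + rng.normal(0, 1, n)                     # path a
adjustment = 0.7 * psycap - 0.2 * group + rng.normal(0, 1, n)   # paths b and c'

def indirect_effect(g, m, y):
    a = sm.OLS(m, sm.add_constant(g)).fit().params[1]                         # group -> mediator
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, g]))).fit().params[1]   # mediator -> outcome
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                                 # resample cases with replacement
    boot.append(indirect_effect(group[idx], psycap[idx], adjustment[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(group, psycap, adjustment)
print(f"indirect effect = {point:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```

A bias-corrected interval or a structural equation modelling package would be closer to common practice; the percentile bootstrap here simply keeps the sketch short.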
Entrepreneurship education is increasingly becoming a focal strategy for promoting entrepreneurship, particularly for fostering entrepreneurial intentions and startups. However, learning and support after startup are equally important if novice entrepreneurs are to gain the confidence to manage their businesses and achieve the desired outcomes. Using a sample of 189 young self-employed individuals in Uganda, this study examines the differential impact of mentoring and self-efficacy on the achievement of intangible outcomes of entrepreneurship, including satisfaction of the need for autonomy, work satisfaction, and the intention to stay in self-employment. Self-efficacy was found to mediate the effects of mentoring on these intangible outcomes. In addition, the results showed substantial gender differences: whereas women's satisfaction of the need for autonomy and intention to stay in self-employment were strongly associated with the direct effects of mentoring, their male counterparts seemed to benefit more when mentoring resulted in increased self-efficacy. Overall, the findings suggest that while mentoring improves the competence of small business owners and consequently the achievement of superior outcomes, it should also focus on boosting self-efficacy, which in turn is essential for the application of entrepreneurial competencies.
The amount of empirical research conducted in the area of differentiated instruction (DI) is overwhelming, necessitating a bibliometric analysis to produce an overview of the literature on the topic. The objective of this study is to identify the characteristics of the most-cited educational research published on DI using science mapping and multi-dimensional bibliometric analysis methods. To answer research questions relating to (i) publication, (ii) authorship, (iii) authors' keywords, and (iv) journals, a total of 100 articles published between 1990 and 2018 and retrieved from SCOPUS were analysed. The results showed that the most-cited articles and the number of publications were highest between 1995 and 2011. With a total of 545 citations, "A Time for Telling", published in the Journal of Cognition and Instruction (1998), was the most cited. The most significant keywords were: (a) differentiated instruction, (b) differentiation, (c) curriculum, (d) mathematics, and (e) reading. The analysis showed that 283 authors contributed to the 100 articles, among whom Carol McDonald Connor was the greatest contributor. It also revealed that the great majority of the most-cited publications were from Q1-ranked journals. These findings inform scholarly efforts to develop a diverse knowledge base in the field and provide scholars with an overview of the progress of research on DI.
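The keyword findings in such bibliometric work come down to counting author keywords across the retrieved records. A minimal sketch follows, again assuming a hypothetical scopus_export.csv with an "Author Keywords" column delimited by semicolons; both the file name and the column name are assumptions about the export format.

```python
# Sketch of an author-keyword frequency count from a Scopus export.
# "scopus_export.csv" and its "Author Keywords" column are assumptions about the export format.
from collections import Counter
import pandas as pd

records = pd.read_csv("scopus_export.csv")
keywords = Counter()
for cell in records["Author Keywords"].dropna():
    for kw in cell.split(";"):
        keywords[kw.strip().lower()] += 1

# Print the ten most frequent author keywords across the retrieved articles.
for keyword, count in keywords.most_common(10):
    print(f"{keyword}: {count}")
```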