How Much Is Transferred From
Training to the Job? The 10%
Delusion as a Catalyst for Thinking
About Transfer
J. Kevin Ford, PhD, Stephen L. Yelon, PhD, and Abigail Q. Billington
There is a common belief in the training field that
only a small amount of what is taught in a
training program is actually transferred to the
job. For example, Burke and Hutchins (2007) note that
researchers suggest that little of what is learned in
training programs is transferred to the job to meet
organizational objectives. In fact, there are a number of
recent statements expressing that belief in terms of
percentages:
One estimate suggests that employees transfer
less than 10% of training and development ex-
penditures back to their workplace. (Brown,
2005, p. 369)
Some researchers and practicioners (sic) indi-
cate that less than 10% of trained skills are
transferred to the job context. (Aik & Tway,
2005, p. 112)
While approximately $100 billion are spent
annually on organizational training programs,
only an estimated 10 per cent of this investment
results in actual behavioral change on the job.
(Elangovan & Karakowsky, 1999, p. 268)
Unfortunately, estimates suggest that only 10% of learning actually
transfers to job performance. (Lim & Morris, 2006, p. 85)
PERFORMANCE IMPROVEMENT QUARTERLY, 24(2) PP. 7–24
© 2011 International Society for Performance Improvement
Published online in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/piq.20108

This article explores the common belief that only a small amount of what is taught in a training program is actually transferred to the job. After providing evidence of the source of the generalization and the acceptance of the notion despite the lack of empirical, behavioral evidence, we take the opportunity to examine the likely reasons for that acceptance. We present five questionable assumptions behind the generalization about minimal transfer. Based on this analysis, we offer four practical strategies for planning, assessing, and reporting training transfer. These strategies include investigating and accounting for variables influencing transfer, expanding the definition of use, stating realistic transfer goals, creating specific transfer objectives, describing observable indicators of use, setting quantitative standards of successful transfer, and reporting the complete transfer story. These strategies provide avenues for producing a more accurate picture of the training transfer experience.

The American Society of Training and Development estimates that U.S. organizations spent $125.88 billion on employee learning and development in 2009 (American Society of Training and Development, 2010). If less than
10% of those funds produce behavioral changes on the job, logically, these
arguments suggest that 90% of what is spent on training is wasted.
Practitioners and researchers have invoked the notion that only a small
percent of what is learned is transferred to justify the need for more empirical
studies on the factors that can affect transfer (Baldwin & Ford, 1988) and to
spur trainers to be more effective in promoting transfer (Broad & Newstrom,
1992). However, in general, writers do not mention the research basis, or the
lack of it, for the 10% statement, nor what it means to say that 10% of what was
trained was transferred to the job. Thus, the purpose of this article is to raise
awareness of the issue, to explore it, and to learn from its analysis.
Acceptance of the 10% Belief
What is the origin of the 10% transfer statement? In 1982, Georgenson
made a hypothetical statement when discussing the cost of training. For the
sake of his argument, he stated,
How many times have you heard training directors say … ‘‘I would
estimate that only 10 percent of content which is presented in the
classroom is reflected in behavioral change on the job. With the
increased demand from my management to demonstrate the effec-
tiveness of training, I’ve got to find a way to deal with the issue of
transfer.’’ (p. 71)
Georgenson (1982) then went on to describe ways to proactively address
transfer issues in the workplace.
In the first major integrative review of the empirical research on transfer,
Baldwin and Ford (1988) used the following statement to motivate readers to
attend to the importance of understanding the factors impacting transfer: ‘‘It
is estimated that while American industries annually spend up to $100 billion
on training and development, not more than 10% of these expenditures
actually result in transfer to the job’’ (p. 63). Since then, for more than 20
years, practitioners and researchers have cited Georgenson, Baldwin and
Ford, and each other regarding the limited extent of training transfer. In fact,
we found 35 articles and chapters on training that have noted the 10% figure;
the list of citations is available from the authors. These citations are partial
evidence of how well the statement has been accepted in spite of the lack of
empirical evidence to support it.
Although this statement came to light in 1982 when Georgenson
suggested a hypothetical situation, we have found only two researchers
who published a response to the generalization by asking for evidence for the
10% statement. In 2001, Fitzpatrick voiced his concern about the unsub-
stantiated assertion. Fitzpatrick said, ‘‘Georgenson had no need to, and did
not, cite any evidence or authority for the 10% estimate; it is clear that he had
used a rhetorical device to catch the reader’s attention. The estimate may or
may not be accurate; it seems plausible but not compellingly so …’’ (p. 18).
But the idea persisted. Furthermore, believing the percentage was an under-
estimate, Saks (2002) had 150 members of a training and development
society estimate the amount transferred in their organizations. They were
asked to indicate what percentage of trainees …
… effectively apply and make use of what they learn in training
programs on the job immediately after, six months after, and one year
after attending a training program. On average, respondents indicated
that 62%, 44%, and 34% of employees apply what they learn in training
on the job immediately after, six months after, and one year after
attending training. (p. 30)
Consequently, we have two guesses (10% and 62%) that are not based on
observed behavioral measures of transfer. Saks noted appropriately that the
estimates in his article were based on judgments, and not on more objective
empirical data. It is true that some authors refer to ‘‘estimates in the field.’’
However, the word ‘‘estimates’’ should have been a warning to readers of the
lack of direct empirical evidence based on behavioral changes on the job.
Performance technologists should attend to and analyze this phenom-
enon. Tolerating this well-known, unsubstantiated notion is tantamount to
admitting, without evidence, that trainers are incompetent and are wasteful
of precious resources, managers are justified in curtailing funds for training,
workers should avoid training, and researchers can say whatever they want
about training. We need to explain this occurrence and act to counter similar
events.
Therefore, first, we present a theoretical explanation for the acceptance
of the 10% belief. If we can understand the reasons, we can avoid succumbing
to them. Second, performance technologists should refute the notion. Of
course, we can go a long way by pointing out that there is no empirical
behavioral evidence mentioned to support the statement. But to understand
the dynamics of the phenomenon, we speculate about what someone would
have to think about transfer to even consider the idea of 10% transfer as a
possibility. If we study the flawed assumptions behind the generalization, we
may be able to counter them with clearer thinking and action. Thus, we
discuss five questionable assumptions underlying the belief.
Third, to counter the faulty thinking noted in the previous section,
we offer four practical strategies for planning, assessing, and reporting
training transfer that account for the realities and the needs of training
and transfer. These strategies may produce a more accurate picture of
training transfer.
A Sticky Idea
To avoid making dubious statements about transfer, and to avoid
accepting them, we need to find some plausible hypotheses for their
acceptance. Why is the statement regarding 10% transfer so believable in
spite of the lack of evidence to support it? We present one possible theory—
the 10% figure is a sticky concept. The term sticky was introduced by Chip and
Dan Heath in their 2007 book, Made to Stick: Why Some Ideas Survive and
Others Die. They hypothesized that people accept and keep ideas with certain
features, while ideas that lack these features fade from people’s minds. They
conclude that ideas have to be simple, unexpected, concrete, credible, and
emotional, and have to tell a good story. We conjecture that these six principles
function to make a sticky idea out of the statement that only 10% of what is
trained is transferred.
Regarding simplicity, the Heaths note that to form a sticky idea, one
needs to strip down a complex issue to its core element. The 10% idea is
simple, straightforward, and easy to understand: ‘‘Ten percent of what is
taught is used.’’
The principle of unexpectedness means that a sticky idea violates
people’s expectations and causes surprise. Surprise, in turn, prompts atten-
tion and heightens curiosity. The 10% notion is also unexpected; in fact, it is
somewhat shocking: ‘‘Only 10%!’’ It is also something that trainers would like
to know: ‘‘What happens back on the job with all that I’ve taught? Do trainees
ever think about the content again? Do they use the procedures and apply the
principles?’’
Concreteness means that a sticky idea must be clear. In particular, the
way the idea is expressed produces vivid imagery. The 10% statement seems
concrete; it creates vivid images—a small chunk of something, a tiny piece of
the pie.
The principle of credibility focuses on believability. In particular, people
in authority can propose an idea that people will accept without skepticism.
The 10% statement seems credible; it was mentioned in a review of the
literature in a professional, peer-reviewed journal, and it has been mentioned
over and over in many articles, chapters, and books on transfer.
Emotions can also play a role in how sticky an idea is. An idea that spurs
feelings of happiness or disgust is going to be stickier than one that is factual
but bland. The 10% idea also leads to an emotional response: ‘‘My goodness,
all that work, time, and money for training and only 10% is used!’’
Finally, an idea that leads people to think immediately of stories
consistent with the idea is likely to have a powerful impact on retention.
The 10% notion is likely to be reinforced by the personal experience of
participants who gained little from training—‘‘I went to a workshop and did
not find much of value in it.’’
Not only does the idea seem sticky, but the researchers who quoted the
phrase were imitating other investigators. Further, writers may have felt that
the phrase ‘‘estimates in the field’’ did not require empirical behavioral
evidence. Thus, Heath and Heath’s (2007) concept of ‘‘stickiness,’’ the
principle of modeling, and the use of the term ‘‘estimates’’ may help, in part,
to explain why the 10% notion has been so readily accepted. Thus, it is
understandable how researchers would repeat the notion. The lesson: We
should be wary about accepting and imitating sweeping statements, even if
they are sticky. Performance technologists should continue to question
generalizations even from authoritative sources.
Faulty Assumptions Reveal Problems With the Idea
Stickiness and modeling provide possible explanations of the longevity of
the 10% idea in spite of the lack of empirical evidence. But there are inherent
problems with the 10% notion even as an estimate. We believe that faulty
assumptions are the logical basis for the belief that a complex phenomenon
like transfer can be reduced to a percentage. If we can understand these
underlying assumptions, we can counter them.
1. Percentage of use is a meaningful way to represent transfer.
2. All training and job conditions are alike, thus, enabling valid
comparisons across various types of training as to how much has
been transferred.
3. All trainers and performance technologists have a common defini-
tion for successful transfer.
4. Performance in the transfer setting is easily measurable in a
quantitative way for any kind of training program.
5. The low rate of transfer implied by the 10% figure represents an
undesirable outcome and a poor financial investment, regardless of
the program.
Percentage of Use Is a Meaningful Way to Represent Transfer. The 10%
statement has no clear referent. Ten percent could mean many things. Ten
percent could mean that all trainees used one of 10 procedures taught; or all
applied some varying proportion of procedures that averaged to 10%. It could
mean that one trainee in 10 tried all the procedures taught; or one in 10 used
one of the procedures perfectly, or one in 10 used the most valuable
procedures. It could mean that one in 10 used desired procedures, but
each one used a different procedure than the others.
In addition, we are not clear as to what sort of performance was
transferred—a single idea, principle, skill, attitude, or approach—or many
ideas, principles, skills, or approaches (Kraiger, Ford, & Salas, 1993). Given
the doubts about meaning, should we be concerned that results varied across
workers?
A brief statement about percentage of use is not informative. What we
need are clear, understandable statements that tell us what did happen on the
job after training.
All Training and Job Conditions Are Alike. The statement ‘‘10% of what is
taught is applied on the job’’ refers to any sort of training for any type of job.
To make this broad assertion, one must believe that researchers identified the
percent of use for each training program and job condition and those
percentages averaged to 10%. Only if all training programs and job
conditions were similar could an average percent of use across all training
programs make any sense.
However, there is considerable variation in jobs, from highly supervised,
relatively simple, routine jobs, to strongly independent, complex work
requiring adaptation (Yelon & Ford, 1999). Training programs can teach
closed skills, such as how to replace a light bulb on a truck, or open
skills, such as how to apply employee motivational strategies. Thus, it is hard to
understand how one general statement could explain the extent of transfer
across different types of training and job conditions.
Transfer is not only affected by the types of tasks trained but also by
differences in work conditions, such as degree of supervision, opportunity to
use the knowledge learned, extent of adaptation required, or amount of
physical and psychological support (Blume, Ford, Baldwin, & Huang, 2010).
For example, for a uniform process required and accounted for daily, it is
likely there will be transfer—or the person might be out of a job. In contrast,
likelihood of transfer might be predictably lower when the worker can choose
to apply the skill or not and when the task is a skill requiring adaptation. In
sum, meaningful statements about the extent of transfer have to be qualified
by the kind of task, such as type of skill, and type of work conditions.
Transfer Is Transfer. A third assumption behind the statement about percent
of transfer is that researchers and practitioners agree that successful transfer
has the same meaning across training programs in all trades and professions.
It’s likely that professionals agree on the general idea of transfer—trainees
use at work what they were trained to do. However, it is also likely that the
definition of successful use of an idea varies considerably across trainers.
Some may consider transfer a success if trainees simply try a new skill. Some
may desire effective performance at least once. Others may require high
levels of performance at each opportunity. Even others may seek adaptation,
or the rejection of an inappropriate application, or the teaching of a coworker
the skill, or the application of an idea in another context. Thus, it is unlikely
that there is one clear definition of successful transfer across training
programs, trainees, job categories, and tasks. Instead we need to expand
our thinking about what the term transfer could signify and define in each
case what we mean in order to measure and report clearly.
Transfer Is Easily Observable and Precisely Measurable. The statement ‘‘10%
of material is transferred’’ assumes measurement precision—not 9% or 11%,
but 10%. It implies that trainers can easily observe and measure transfer and
reduce the measures to precise percentages that mean something specific.
However, depending on what is being trained, it is more or less difficult and
costly to attempt to measure transfer in a reasonable and valid way (Barnett &
Ceci, 2002; Ford & Schmidt, 2000). Producing a meaningful statement about
transfer is a rigorous and expensive process. Trainers have to specify job
conditions and performance, derive measures, and plan data collection,
processing, and interpretation. At best, for some tasks, trainers can readily
specify, categorize, and count behaviors. At worst, there are tasks that are
hard to observe or measure, such as mental processes and adaptive skills.
Even investigators employing the most rigorous methods qualify their
results, citing the error inherent in measurement as well as the type of
behavior and surrounding circumstances. Therefore, while we cannot hope
for maximum precision, we can specify what we want to observe and count;
create strategies for making learning and transfer more observable and
measurable; and also interpret our results in light of measurement error, type
of task, and circumstances.
A Low Rate of Transfer Represents a Bad Outcome and a Poor Financial
Decision. The 10% statement carries the connotation that a small percent of
use is a bad outcome. But there are programs where trainers would be
satisfied to see all learners apply one powerful
principle or procedure out of many taught, or
see one learner in 10 use all key ideas, or see all
learners apply different ideas. In those
programs trainers understand that their
trainees begin a course with diverse needs and
experiences (Baldwin, Ford, & Blume, 2009;
Goldstein & Ford, 2002). They realize that
each person does not need the same
knowledge to improve performance; they are
aware that their trainees may choose among
offerings to apply what they perceive as useful;
they know of the variation of job conditions that
may influence transfer.
Yelon, Sheppard, Sleight, and Ford (2004) documented one such pro-
gram. They studied a course in which physicians were being trained to teach
medical students. The physicians came to the program with the freedom to
choose what to use from training and with varied goals, work conditions,
motivations, knowledge, and experience. The researchers found that these
physicians decided what to use based upon what they needed, what was
practical, and what they believed would be effective for their work. Some
selected one procedure, some a few principles, and some complete protocols,
depending on fit to their personal needs. For their work and training
conditions, application of all ideas by all learners was neither realistic nor
desirable. Thus, performance technologists should not assume that a low
rate of transfer is bad, but instead explore the circumstances for what is
possible and acceptable.
Consider the issue of low transfer rate from another viewpoint. If one
assumes that 10% transfer is bad, then a perfect record of transfer would be
good. Transfer perfection implies that all trainees in all programs should
learn and use everything taught, regardless of the variety of the type of task,
their needs and motives, their range of experiences, their job conditions, and
their responsibilities and opportunities. Perfection implies that training can
and should have a uniform influence on all those trained, rather than a
differential effect on each person depending on need. It’s understandable
that for professions that seek flawless, uniform performance, trainers would
set the highest standards for their graduates and would be willing to spend
extraordinary resources to achieve them, regardless of learners’ entering
knowledge, and be quite disappointed if they didn’t succeed. We can imagine
professions where that would apply, such as an airline pilot, but we wonder if
perfect transfer would be a necessary and realistic strategy for most training.
Whether a low or high rate of transfer is good or not depends on context.
In summary, there are several lessons we learned from considering the
assumptions seemingly underlying the 10% notion. When making general-
izations about transfer, we need to represent transfer accurately and mean-
ingfully. In reporting the results of transfer across programs, we need to
consider varying training and job conditions, diverse definitions of successful
transfer, measurement constraints, and qualified interpretations of the
extent of use. These lessons have implications for how to draw an accurate
picture of the state of the art of transfer.
Toward a More Accurate Picture of Transfer: Four Strategies
Let us state conclusions about transfer, based on research, that tell a
meaningful but complex story. Toward that end, we explore four strategies
for planning, assessing, and reporting transfer to continue progress toward
more accurate, more nuanced statements of the extent of transfer. We will
use two examples. In one, military mechanics learn to troubleshoot an
electrical problem. In the other, physicians learn to teach medical students.
Strategy 1: Identify the Factors Likely to Influence Transfer. Scholars have
reported factors affecting transfer, such as job and task conditions, learner
characteristics, and training features (Baldwin et al., 2009; Blume et al., 2010).
By identifying the presence and strength of these factors, trainers may estimate
the likelihood and extent of transfer for a specific training program and, in
retrospect, may form hypotheses about likely influences on the amount
and type of transfer they observe.
For example, the military mechanics’ trainers could investigate job
conditions that might influence a new group of recruits to use troubleshoot-
ing skills. They may investigate the supervision style of the mechanics’
supervisors because they can have a major impact on transfer (Ford,
Quiñones, Sego, & Sorra, 1992). If trainers find supervisors who immediately
require new mechanics to perform the full array of skills learned and pair the
new mechanics with experienced coworkers, trainers might predict a likely
positive influence on transfer. However, if supervisors assign only basic tasks
to mechanics working alone, the trainers might predict a negative effect on
transfer of the full set of skills.
In contrast, supervisors’ behaviors are not relevant to the physician-
teachers’ applications, because physicians are autonomous professionals
with leaders who do not direct their teaching. Thus, their trainers would
explore job conditions that influence personal choice, such as physicians’
time, skill, and teaching responsibilities. Most physician-teachers have
considerable time constraints. Some have peers who support innovation,
but others don’t. Some have many immediate opportunities to use their new
skills, but others have limited opportunities. Some teach primarily in the
classroom, others in the clinic. Due to this mix of factors, trainers may predict
considerable variation in transfer.
Yelon et al. (2004) asserted that independent individuals decide to use or
not use what they are taught depending on their job experience, their self-
view, and their professional goals and motivations. Thus, to add to expecta-
tions for transfer, trainers may consider learners’ characteristics. For
example, military mechanic trainers may find that some trainees are excited
about their career field, but some wish they were in a different career field.
Some recruits have basic mechanic skills, but most have limited experience.
From tests, trainers can tell that some are fast learners while others will likely
struggle to gain the knowledge and skills needed to succeed on the job. Some
need close supervision on the job, while some are relatively independent.
These findings point to likely variations in transfer. Those highly motivated
recruits who learn more skills to a higher level of proficiency in training are
more likely to show transfer to the job.
The physician-teachers’ trainers find that some of the doctors have very
little experience as medical educators, but some have as much as 10 years.
Some doctors think they need no improvement in teaching, some think they
need much improvement, and some think they need improvement in one
particular area. These observations might lead trainers to expect consider-
able variation in what the physicians decide to apply on the job. Some may
choose to apply just one idea, some a few ideas, some many.
To form realistic expectations for transfer, trainers can also study aspects
of the training process. One feature to explore is the degree of consistency of
training design with the desired job performance (Cruz, 1997; Yelon, 1996;
Yelon & Berge, 1988). For example, with the mechanics’ training, instructors
provide detailed task steps, demonstrate those steps, require frequent practice,
and provide precise corrective feedback. Based on this design strategy, one
might reasonably expect that the mechanics will be likely to learn and transfer.
In teaching the physicians how to teach procedural skills, the instructors
choose a simple, practical teaching approach. Instructors require each
physician to plan, teach, and receive feedback on two lessons to real students.
With such real world–oriented instruction about teaching psychomotor
skills, trainers might expect a high likelihood of transfer for those doctors
who teach skills.
With an integrated analysis of work conditions, worker characteristics,
and course features, trainers are in a better position to predict and to
understand the variation in transfer across individual workers and across
work locations. In many cases, due to varying job opportunities, differences
in trainee motivation and ability, and differences in the adequacy and fit of
training to the job conditions and performance, trainers should expect a range
of transfer among trainees. The two cases illustrate the point that for some
training programs, near perfect transfer is impossible to achieve.
Further, specifying variables influencing transfer in a specific training course
can provide a foundation for setting rough but realistic expectations of the
extent to which trainees will use what they are taught.
Strategy 2: Define Realistic Transfer Goals and Create Meaningful Measures
of Performance. A transfer goal is a general description of job performance
that would indicate transfer. A reasonable transfer goal would be a realistic
expectation that trainees will use a training objective considering work
conditions, trainee characteristics, and program features.
The translation from training objective to transfer goal could be
straightforward if the course designer derived training objectives from
desired job performance. For example, because military mechanics trouble-
shoot machine malfunctions, trainers aim for the training objective: ‘‘When
troubleshooting malfunctions, the mechanic will use the hydraulic test stand
to isolate and identify the source of the problem following the step-by-step
guide as outlined in the technical orders.’’ The transfer goal is clear—that is,
on the job, troubleshoot malfunctions according to protocol.
But is it a reasonable transfer goal? To convert this objective to a
reasonable transfer goal, trainers must weigh what they want against what
they can get, considering the forces interfering with transfer. This is where
trainers may apply their analysis of the job conditions, learner characteristics,
and features of training likely to influence transfer. Given this analysis,
trainers may come to the conclusion that it is realistic to expect this cohort of
newly trained personnel will come to the job partially proficient, needing help
only with difficult problems. It is reasonable and acceptable to expect new
mechanics to ask coworkers questions; however, they must always follow
prescribed safety precautions. Although partial performance is reasonable at
first, full proficiency is a reasonable expectation by 4 months on the job. Thus,
two reasonable transfer goals might be: (1) on his/her own, a newly placed
mechanic will be able to narrow down a troubleshooting problem for a
mechanical malfunction to two or three likely factors, with no safety
violations, and will readily seek help, using correct terms and labels, to finish
the troubleshooting task; and (2) a moderately experienced mechanic with
at least 4 months on assignment, given a mechanical malfunction, will
independently troubleshoot the problem with no technical errors and no
safety violations.
One of the training objectives for physicians learning to teach is ‘‘Write
and deliver a lesson whose elements are internally consistent and are directed
toward competence in medical practice.’’ Trainers know that even with time
constraints, it would be reasonable to expect most of the doctors to be
assigned a classroom talk within 3 months, for which they should be able to
write an internally consistent lesson clearly related to a physician’s perfor-
mance. Thus, a realistic transfer goal might read, ‘‘A physician teaching in a
classroom, within 3 months of training, will write an internally consistent
lesson clearly related to a physician’s performance.’’
After determining the transfer goal, trainers may consider describing
more exact, observable job actions—transfer objectives—that fit a general
description in the transfer goal. The transfer objective is the basis for
measuring and reporting transfer. The parts of a transfer objective, much
like the parts of a training objective, are conditions and behaviors, but these
conditions and behaviors do not describe assessment at the end of training;
they describe performance on the job, and may add ways to categorize and
count those behaviors.
Of course, evaluators may find it more or less difficult to identify and
describe any use of what has been taught, depending on task features. On the
one hand, it may be relatively easy for an evaluator to recognize the use of
what is taught if a learner responds to a few clear cues with a simple,
observable procedure with obvious features, such as when trainees are taught
to perform skills that must be reproduced on the job exactly according to a
precise step-by-step protocol.
On the other hand, an evaluator may find it relatively difficult to identify
or observe the use of what is taught if, for example, on the job, a worker has to
create procedures based on principles learned, or a worker must adapt what
has been learned. For example, medical educators are taught to begin their
instruction by relating the subject matter to clinical practice. The educators
have to create new, meaningful, attention-getters to fit the subject matter and
available clinical practice.
In addition to describing the behaviors indicating use, practitioners may
consider stating their reasons for doing so. Suppose, for example, that one of the transfer objectives for both mechanics and physician teachers is "After performing the desired task on the job, to demonstrate that they used the proper mental approach, workers will describe orally or in writing the process they went through to complete the task." The mechanics' trainers might ask what mental strategy mechanics used as the basis for their choice of troubleshooting actions, while trainers of the medical teachers might ask the physicians to describe how they thought through their lesson plan. Trainers might add the explanation for their inclusion of this objective: "In those cases where the use of an idea or skill is a mental process and is, therefore, invisible, an observer has nothing to look for to detect transfer. Our solution is to ask workers to state aloud or write the stories of their path to application."
In addition to specifying actions as measures of transfer, trainers may identify concrete products as evidence of use. For example, the mechanics' trainers could measure each individual's performance by inspecting, for each task, copies of the supervisors' reports indicating satisfactory performance and the amount of help needed. Physicians' trainers could ask doctors to send in lesson plans, the materials they created, and any student ratings.
To broaden their view of use, the physicians’ trainers could capture
additional expressions of transfer by identifying different behaviors in
alternate contexts indicating application. For example, suppose a medical educator trainee did not have an opportunity to present a classroom lesson within three months, but she coached a colleague planning a lesson and asked the program's curriculum committee pertinent questions about objectives. These alternate uses of learned principles might be legitimate indicators of use, and thus legitimate transfer objectives and measures.
As trainers create transfer objectives and measures, they should bear in
mind that although the amount and quality of effort expended in measuring
transfer depends primarily on the importance of the task, it also depends on
the number of trainees and the resources available to use data-gathering
methods such as interviewing or observing. For example, for a moderately
important task, if there were relatively few trainees and relatively many
supervisors, an evaluator could justify use of more time-intensive techniques
on more workers. However, if there were many trainees and few evaluation
staff, then they might have to use timesaving methods, such as surveys or
written reports for the whole group. Keep in mind, though, that there is some evidence that trainees' and managers' survey measures overestimate skill transfer (Chiaburu, Sawyer, & Thoroughgood, 2010).
Strategy 3: Determine the Evidence That Would Convince You That an Adequate Amount of Transfer Is Taking Place. To set a standard for success, ask: "What amount of use would convince us that training was successful considering the circumstances—the task, context, trainees, and program?" Note that the question calls for two responses: how much performance we would like, and how much performance is likely.
To state the results they would like, trainers decide for each transfer
objective the number of trainees they want to see performing the task on the
job according to the criteria set, a specific number of times, within a set time
period. For example, medical educators might define success as 70% of the physician trainees meeting 90% of the elements of an internally consistent, real world–oriented lesson plan, at least once, within the first three months after training.
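A standard like this reduces to simple counting once the follow-up data are in hand. As a minimal sketch (the record layout, field names, and thresholds below are illustrative assumptions, not from the article):

```python
# Sketch: check a success standard of the form "70% of trainees meet
# at least 90% of the lesson-plan elements within 3 months."
# All records, field names, and thresholds are illustrative.

def standard_met(trainees, pct_trainees=0.70, pct_elements=0.90, window_days=90):
    """Return True if enough trainees met enough elements in time."""
    successes = sum(
        1 for t in trainees
        if t["days_to_first_use"] <= window_days
        and t["elements_met"] / t["elements_total"] >= pct_elements
    )
    return successes / len(trainees) >= pct_trainees

trainees = [
    {"days_to_first_use": 30,  "elements_met": 9,  "elements_total": 10},
    {"days_to_first_use": 80,  "elements_met": 10, "elements_total": 10},
    {"days_to_first_use": 120, "elements_met": 10, "elements_total": 10},  # too late
    {"days_to_first_use": 45,  "elements_met": 7,  "elements_total": 10},  # too few elements
]

print(standard_met(trainees))  # 2 of 4 trainees qualify: 50% < 70%, so False
```

The same check accommodates a different standard for each transfer objective simply by changing the keyword arguments.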
To decide how much performance is likely to transfer, trainers take into
account task features, worker characteristics, work conditions, and training
qualities. If trainers find that some of these factors are likely to be detrimental
to transfer, they could reduce the level of their standard. They may do so if the
task is complex, requires many prerequisites, takes time to develop, and
demands adaptation and creativity. They could do so if they are teaching a
controversial or new method or way to enhance a known skill. They could
reduce their estimate when learners are not motivated to master the material
and are free to decide which ideas to adopt. They could do so if, on the job,
there is pressure to maintain the status quo, few physical resources, and little
opportunity or time to apply new ideas. Trainers could also ease their
standard if they realize training is not optimal for the task.
Thus, trainers might be satisfied with partial proficiency for, say, half of
the trainees in a program where an optional skill is taught and where
participants come from varying job conditions and have varying needs. In
contrast, when well-prepared trainees with similar needs come from similar work arrangements to learn a frequently used, basic process involving commonly available resources, a trainer might want and expect 90% of the trainees to perform the process flawlessly within a month of returning to work. The purpose is not to set a low, medium, or high
standard. The goal is to set a realistic, responsible criterion for each transfer
objective, based on a judicious analysis of variables that may affect transfer in
a specific training program.
Strategy 4: Report the Complete Transfer Story. What
evaluators report about transfer and how they do so could differ depending
on the purpose of the report, the nature of the desired performance, and the
scope and importance of the training. Consider each of these five features of a
complete transfer report.
The first feature of a complete transfer report is that it fits the purpose.
The content of a transfer report depends on its purpose, and its purpose
depends on its audience. Typically there are two audiences: professionals
within an organization (those who design instruction, teach courses, evaluate
outcomes, and manage logistics) and professionals outside an organization
(those who study transfer, design instruction, and promote application).
Training practitioners want enlightening reports to assess the overall quality
of the instruction and to be able to improve job performance by improving
design and by influencing contributing factors, such as psychological sup-
port at work. In contrast, researchers want reliable reports that may
substantiate or refute general principles of transfer. They want examples
to use to motivate others to promote transfer and to advise clients regarding
best practices.
When revealing summative results, reporters could consider including
not only the empirical results of how many workers applied specific ideas
back on the job, but also the transfer goals, transfer objectives, and the extent
of transfer deemed to define success. This information is based on the data gathered during the needs assessment phase of the instructional systems model (Goldstein & Ford, 2002) and can then serve as a benchmark for the empirical results, helping answer the question: Have reasonable expectations been met? In addition, for formative purposes, and to reveal possible
influences on transfer, reporters could describe any evidence that major
factors, such as supervisor support or practice opportunities on the job,
affected transfer.
A second feature is that transfer reports vary depending on the type of
performance. Because some tasks are relatively easy to count, they are easier
to report as frequencies, percentages, and averages. When a worker performs
a uniform, clearly observable task with a standard, discernible product,
evaluators could count checked items on a list of recognizable steps or
features. Then they could count the number of errors, adherence to time
limits, and repetitions of desired performance. Reporters could then state
that a worker has, on three occasions, properly performed nine of the ten
steps, and has created twenty products with seven of the eight desired
features, as specified in the transfer goal. They could combine individuals’
scores and report, "Half of the trainees met the minimum desired." Further, they could report common errors in quantitative form: "More than 75% of trainees performed step six improperly."
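Frequency statements like these can be computed directly from per-trainee step checklists. A minimal sketch with hypothetical data (the checklist layout and the minimum-steps threshold are assumptions, not from the article):

```python
# Sketch: turn per-trainee step checklists into the frequencies a
# transfer report might state. All data below are hypothetical.
from statistics import mean

# Each row records which steps of a six-step procedure a trainee
# performed properly, as judged against the transfer objective.
checklists = [
    [True, True,  True, False, True, False],
    [True, True,  True, True,  True, False],
    [True, False, True, True,  True, False],
    [True, True,  True, True,  True, True],
]

min_steps_required = 5
met_minimum = sum(sum(row) >= min_steps_required for row in checklists)
print(f"{met_minimum / len(checklists):.0%} of trainees met the minimum")  # 50%

# Error rate per step, e.g. "75% of trainees performed step 6 improperly."
for step in range(len(checklists[0])):
    error_rate = mean(not row[step] for row in checklists)
    print(f"step {step + 1}: {error_rate:.0%} improper")
```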
However, trainers could use other approaches when reporting on tasks
that are more difficult to quantify. For a skill that can be done well in several
ways, evaluators could employ a checklist of rules and principles for
acceptable adaptations. Evaluators could then report the number of accep-
table and unacceptable performance variations, actions adhering to general
principles, and repetitions of acceptable performance. Evaluators could
report that a worker has, on five occasions, properly performed three acceptable variations of the skill adhering to eight of the ten principles on the checklist noted in the transfer goal, and has served three clients who report adequate satisfaction. They could report combinations of individuals' scores, for example, "Half of the trainees met the minimum desired performance using at least eight of the ten principles on the checklist."
For mental skills, evaluators could collect and retell stories about what workers were thinking as they applied the ideas learned. Reporters could categorize relevant themes and count entries such as common behaviors, conditions, and consequences (see Yelon et al., 2004, for an example). Even when results could be stated as percentages and averages, reporters could add alternative measures such as application stories. By using multiple methods, reporters can enhance the validity of results (Cook, 1985). When reporters show how stories and numbers corroborate each other, results are more credible. In contrast, discrepancies can lead researchers to delve into phenomena and create fresh explanations (McGrath & Johnson, 2003).
A third feature is that a complete report includes context. Context makes
a difference in meaning. Even when performance is relatively easy to
quantify, readers gain understanding from information about context. For
example, suppose we find that half the workers applied a newly learned skill,
but we also know that compared with other workers, their work environ-
ments matched the practice conditions more precisely and provided more
frequent opportunities for use. Now we have clues as to what may have
happened. If there was a disappointingly low amount of transfer, but we also
found that the instruction was delivered by an inexperienced trainer using a
brand-new training technology with unmonitored practice, then we might
be able to explain, not excuse, the lack of transfer. Consider reporting facts
about contextual variables influencing transfer as noted in Table 1.
A fourth feature is that the report’s quality and length depends on the
importance of job performance. The more important the performance, the
more likely reporters will need to write exacting, precise, and lengthy reports.
An important performance might be one whose outcome is believed to be of great potential value, to pose considerable risk, or to cost a great deal to produce. For an important task, evaluators could obtain considerable
resources and expend extraordinary effort to get all the useful and mean-
ingful transfer information possible. If evaluators put forth all that effort, they
should report all their findings.
A final feature is that a complete transfer report draws conclusions that illustrate the whole complex transfer story. Regardless of relative importance, a transfer report should tell the whole story. The report should identify the plans for assessing transfer. We have suggested including the work, training, and individual variables thought to influence transfer, the realistic
transfer goals, the transfer objectives and measures, and the expected extent
of transfer. Then the transfer results could be reported in light of the plans:
what was actually observed, categorized, and counted; how well those
measures captured the essence of the transfer goal; the difficulties in
observing, categorizing, and counting; the number or proportion of trainees
who acted appropriately under the conditions specified within the time
desired; how well those results matched the success standard; in retrospect,
how realistic the goal was; and evidence that the case-specific variables may
have influenced transfer. Then, we would urge reporters to judiciously state
what general lessons readers can take away from the investigation, qualifying
the statements in terms of what the task was, who the learners were, what the
training was like, and what the job conditions were, so that readers can
appreciate this particular instance of the process of transfer in all its
complexity and variation.
TABLE 1 CONTEXTUAL VARIABLES INFLUENCING TRANSFER
1. Training Information—What were the stated training plans?
What actually happened during training?
What were the differences in training sessions for different groups of workers?
How close were the job conditions to the training practice and test conditions?
How much demonstration and practice of varying conditions were present in training?
Were the workers forced to attend training, or did they attend because they wanted to learn?
If training used new technology and self-instruction, how well were trainees able to adapt to the new instruction?
2. Work Conditions—What factors at work supported and inhibited transfer?
What were pertinent work conditions at various locations?
Did the worker know that the behavior was expected at work?
Was the performer accountable to others for the task?
How much and what sort of psychological support did workers get to do the performance?
What material resources were available on the job to support the performance?
Was this task frequently called for on the job?
Did the performance yield natural, observable feedback?
How soon and how frequently was performance checked?
3. Individual Characteristics—What individual characteristics helped or hindered performance?
What was different about the workers who succeeded and those who didn't?
What were contributing previous experiences and skills?
Did the workers have some physical feature that aided their performance?

Concluding Comments

We began with a cautionary tale. Researchers and practitioners have made the assumption that the extent of transfer is very low, possibly because of its repeated assertion rather than because of its empirical support. Because of this assertion's sticky nature, it appears that most professionals accepted and promulgated the notion without a careful, critical search for supporting evidence.
We saw this phenomenon as an opportunity to reflect upon the nature of
transfer. After exploring the assumptions underlying the 10% statement, we
suggested strategies for planning and reporting transfer, such as investigat-
ing and accounting for variables influencing transfer, expanding the defini-
tion of use, stating realistic transfer goals, creating specific transfer
objectives, describing observable indicators as measures of use, setting
quantitative standards of successful transfer, and, finally, reporting the
complete transfer story.
The lesson we learned along the way to understanding transfer is to be
wary of and to question all-encompassing statements that represent con-
ventional wisdom. Further, we have learned to search for convincing
evidence; critically analyze assumptions behind generalizations; and plan,
assess, and report transfer in a thoughtful manner. By testing the utility of the
strategies mentioned, our hope is that researchers and practitioners can
move closer to understanding the true nature of transfer and prevent
dissemination of inappropriate generalizations about transfer, as was the
case with the 10% delusion.
References
Aik, C. T., & Tway, D. C. (2005, January). On the job training: An overview and an appraisal.
Proceedings of the International Conference of Applied Management and
Decision Sciences (AMDS 2005), Athens, GA.
American Society of Training and Development. (2010). 2010 state of the industry report.
Alexandria, VA: ASTD Press.
Baldwin, T. T., & Ford, J. K. (1988). Transfer of training: A review and directions for future
research. Personnel Psychology, 41, 63–105.
Baldwin, T. T., Ford, J. K., & Blume, B. D. (2009). Transfer of training 1988–2008: An updated
review and new agenda for future research. In G. P. Hodgkinson & J. K. Ford (Eds.),
International review of industrial and organizational psychology (Vol. 24, pp. 41–70).
Chichester, UK: Wiley.
Barnett, S. M., & Ceci, S. J. (2002). When and where do we apply what we learn? A taxonomy
for far transfer. Psychological Bulletin, 128, 612–637.
Blume, B. D., Ford, J. K., Baldwin, T. T., & Huang, J. (2010). Transfer of training: A meta-
analytic review. Journal of Management, 20, 1–41.
Broad, M. L., & Newstrom, J. W. (1992). Transfer of training: Action-packed strategies to
ensure high payoff from training investments. Cambridge, MA: Perseus Publishing.
Brown, T. C. (2005). Effectiveness of distal and proximal goals as transfer-of-training
interventions: A field experiment. Human Resource Development Quarterly, 16,
369–387.
Burke, L., & Hutchins, H. M. (2007). Training transfer: An integrative literature review.
Human Resource Development Review, 6, 263–296.
Chiaburu, D. S., Sawyer, K., & Thoroughgood, C. (2010, April). Transferring more than
learned in training? Employees’ and managers’ estimation of untrained training
content. Presented at the 25th annual conference of the Society for Industrial and
Organizational Psychology, Atlanta, GA.
Cook, T. (1985). Postpositivist critical multiplism. In R. Shotland & M. Mark (Eds.), Social
science and social policy (pp. 25–62). Beverly Hills, CA: Sage.
Cruz, B. J. (1997). Measuring the transfer of training. Performance Improvement Quarterly,
10, 83–97.
Elangovan, A. R., & Karakowsky, L. (1999). The role of trainee and environmental factors in
transfer of training: An exploratory framework. Leadership & Organizational Devel-
opment Journal, 20, 1–9.
Fitzpatrick, R. (2001). The strange case of the transfer of training estimate. The Industrial-Organizational Psychologist, 39, 18–19.
Ford, J. K., Quiñones, M. A., Sego, D. J., & Sorra, J. S. (1992). Factors affecting the opportunity to perform trained tasks on the job. Personnel Psychology, 45(3), 511–527.
Ford, J. K., & Schmidt, A. M. (2000). Emergency preparedness training: Strategies for
enhancing real-world performance. Journal of Hazardous Materials, 75, 195–215.
Georgensen, D. L. (1982). The problem of transfer calls for partnership. Training and
Development Journal, 36, 75–78.
Goldstein, I., & Ford, J. K. (2002). Training in organizations (4th ed.). Belmont, CA: Wadsworth.
Heath, C., & Heath, D. (2007). Made to stick: Why some ideas survive and others die. New York, NY: Random House.
Kraiger, K., Ford, J. K., & Salas, E. (1993). Application of cognitive, skill-based, and affective
theories of learning outcomes to new methods of training evaluation. Journal of
Applied Psychology, 78, 311–328.
Lim, D. H., & Morris, M. (2006). Influence of trainee characteristics, instructional satisfaction, and organizational climate on perceived learning and training transfer. Human Resource Development Quarterly, 17, 85–115.
McGrath, J. E., & Johnson, B. A. (2003). Methodology makes meaning: How both qualitative and quantitative paradigms shape evidence. In P. M. Camic, J. E. Rhodes, & L. Yardley (Eds.), Qualitative research in psychology: Expanding perspectives in methodology and design. Washington, DC: American Psychological Association.
Saks, A. M. (2002). So what is a good transfer of training estimate? A reply to Fitzpatrick. The Industrial-Organizational Psychologist, 39, 29–30.
Yelon, S. L. (1996). Powerful principles of instruction. New York, NY: Addison Wesley/
Longman.
Yelon, S. L., & Berge, Z. L. (1988). The secret of instructional design. Performance and
Instruction Journal, 27, 11–13.
Yelon, S. L., & Ford, J. K. (1999). Pursuing a multidimensional view of transfer. Performance
Improvement Quarterly, 12, 58–78.
Yelon, S. L., Sheppard, L., Sleight, D., & Ford, J. K. (2004). Intentions to transfer: How
autonomous professionals become motivated to use trained skills. Performance
Improvement Quarterly, 17, 82–103.
J. KEVIN FORD
J. Kevin Ford, PhD, is a professor of psychology at Michigan State
University. His research interests focus on improving training effectiveness.
He is also a fellow of the American Psychological Association. More
information can be found at www.io.psy.msu.edu/jkf. Mailing address: 315
Psychology Building, Michigan State University, East Lansing, MI
48824–1116. E-mail: fordjk@msu.edu
STEPHEN L. YELON
Stephen L. Yelon, PhD, is a professor emeritus and a consultant to faculty
at Michigan State University, a faculty member of the Primary Care Faculty
Development Program, and investigator of the dynamics of transfer of
training. Mailing address: A214 East Fee Hall, Michigan State University,
East Lansing, MI 48824. E-mail: yelons@gmail.com
ABIGAIL Q. BILLINGTON
Abigail Q. Billington is a doctoral candidate in organizational psychology
at Michigan State University and a research fellow at the U.S. Army Research
Institute. Her main research interest is training transfer. Mailing address:
1233 Regatta Street, Apt. 302, Fayetteville, NC 28301. E-mail: billinga@msu.edu